2025-06-01 03:00:47.281164 | Job console starting
2025-06-01 03:00:47.296309 | Updating git repos
2025-06-01 03:00:47.592613 | Cloning repos into workspace
2025-06-01 03:00:47.905199 | Restoring repo states
2025-06-01 03:00:47.929981 | Merging changes
2025-06-01 03:00:47.929998 | Checking out repos
2025-06-01 03:00:48.319873 | Preparing playbooks
2025-06-01 03:00:49.155044 | Running Ansible setup
2025-06-01 03:00:53.435412 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-01 03:00:54.056484 |
2025-06-01 03:00:54.056593 | PLAY [Base pre]
2025-06-01 03:00:54.072173 |
2025-06-01 03:00:54.072275 | TASK [Setup log path fact]
2025-06-01 03:00:54.099890 | orchestrator | ok
2025-06-01 03:00:54.126069 |
2025-06-01 03:00:54.126190 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-01 03:00:54.174616 | orchestrator | ok
2025-06-01 03:00:54.189954 |
2025-06-01 03:00:54.190052 | TASK [emit-job-header : Print job information]
2025-06-01 03:00:54.250276 | # Job Information
2025-06-01 03:00:54.250413 | Ansible Version: 2.16.14
2025-06-01 03:00:54.250453 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-06-01 03:00:54.250481 | Pipeline: periodic-daily
2025-06-01 03:00:54.250500 | Executor: 521e9411259a
2025-06-01 03:00:54.250518 | Triggered by: https://github.com/osism/testbed
2025-06-01 03:00:54.250536 | Event ID: 9b9c1124be9e4c1f8b1d7bdde556ef5c
2025-06-01 03:00:54.255926 |
2025-06-01 03:00:54.256009 | LOOP [emit-job-header : Print node information]
2025-06-01 03:00:54.346994 | orchestrator | ok:
2025-06-01 03:00:54.347159 | orchestrator | # Node Information
2025-06-01 03:00:54.347192 | orchestrator | Inventory Hostname: orchestrator
2025-06-01 03:00:54.347217 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-01 03:00:54.347240 | orchestrator | Username: zuul-testbed02
2025-06-01 03:00:54.347260 | orchestrator | Distro: Debian 12.11
2025-06-01 03:00:54.347284 | orchestrator | Provider: static-testbed
2025-06-01 03:00:54.347305 | orchestrator | Region:
2025-06-01 03:00:54.347326 | orchestrator | Label: testbed-orchestrator
2025-06-01 03:00:54.347345 | orchestrator | Product Name: OpenStack Nova
2025-06-01 03:00:54.347364 | orchestrator | Interface IP: 81.163.193.140
2025-06-01 03:00:54.358372 |
2025-06-01 03:00:54.358486 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-01 03:00:54.838181 | orchestrator -> localhost | changed
2025-06-01 03:00:54.847102 |
2025-06-01 03:00:54.847215 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-01 03:00:55.906669 | orchestrator -> localhost | changed
2025-06-01 03:00:55.921406 |
2025-06-01 03:00:55.921525 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-01 03:00:56.185178 | orchestrator -> localhost | ok
2025-06-01 03:00:56.190989 |
2025-06-01 03:00:56.191080 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-01 03:00:56.222642 | orchestrator | ok
2025-06-01 03:00:56.254237 | orchestrator | included: /var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-01 03:00:56.266937 |
2025-06-01 03:00:56.267085 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-01 03:00:57.662179 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-01 03:00:57.662416 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/work/980ddd066ddd4088882f2d78fb6ced5e_id_rsa
2025-06-01 03:00:57.662500 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/work/980ddd066ddd4088882f2d78fb6ced5e_id_rsa.pub
2025-06-01 03:00:57.662533 | orchestrator -> localhost | The key fingerprint is:
2025-06-01 03:00:57.662562 | orchestrator -> localhost | SHA256:Rc4kYVEkrmg/jhhFqE+czZlQ4OPDdAMqZrKWZKVyzkE zuul-build-sshkey
2025-06-01 03:00:57.662589 | orchestrator -> localhost | The key's randomart image is:
2025-06-01 03:00:57.662624 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-01 03:00:57.662648 | orchestrator -> localhost | | Eo. *== |
2025-06-01 03:00:57.662671 | orchestrator -> localhost | | +o+ o B |
2025-06-01 03:00:57.662693 | orchestrator -> localhost | |=*O + . + |
2025-06-01 03:00:57.662715 | orchestrator -> localhost | |O@.X = . . |
2025-06-01 03:00:57.662748 | orchestrator -> localhost | |ooX O . S |
2025-06-01 03:00:57.662789 | orchestrator -> localhost | |.o + . |
2025-06-01 03:00:57.662825 | orchestrator -> localhost | | o o |
2025-06-01 03:00:57.662867 | orchestrator -> localhost | | o o . |
2025-06-01 03:00:57.662891 | orchestrator -> localhost | | . . . |
2025-06-01 03:00:57.662913 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-01 03:00:57.662977 | orchestrator -> localhost | ok: Runtime: 0:00:00.925562
2025-06-01 03:00:57.671570 |
2025-06-01 03:00:57.671782 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-01 03:00:57.692355 | orchestrator | ok
2025-06-01 03:00:57.704107 | orchestrator | included: /var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-01 03:00:57.719207 |
2025-06-01 03:00:57.719305 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-01 03:00:57.758387 | orchestrator | skipping: Conditional result was False
2025-06-01 03:00:57.771598 |
2025-06-01 03:00:57.771728 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-01 03:00:58.359410 | orchestrator | changed
2025-06-01 03:00:58.365982 |
2025-06-01 03:00:58.366072 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-01 03:00:58.622548 | orchestrator | ok
2025-06-01 03:00:58.628697 |
2025-06-01 03:00:58.628788 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-01 03:00:59.055138 | orchestrator | ok
2025-06-01 03:00:59.062776 |
2025-06-01 03:00:59.062908 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-01 03:00:59.473464 | orchestrator | ok
2025-06-01 03:00:59.484642 |
2025-06-01 03:00:59.484767 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-01 03:00:59.510015 | orchestrator | skipping: Conditional result was False
2025-06-01 03:00:59.517048 |
2025-06-01 03:00:59.517159 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-01 03:01:00.184026 | orchestrator -> localhost | changed
2025-06-01 03:01:00.199646 |
2025-06-01 03:01:00.199780 | TASK [add-build-sshkey : Add back temp key]
2025-06-01 03:01:00.665489 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/work/980ddd066ddd4088882f2d78fb6ced5e_id_rsa (zuul-build-sshkey)
2025-06-01 03:01:00.665849 | orchestrator -> localhost | ok: Runtime: 0:00:00.018084
2025-06-01 03:01:00.674047 |
2025-06-01 03:01:00.674175 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-01 03:01:01.258945 | orchestrator | ok
2025-06-01 03:01:01.277767 |
2025-06-01 03:01:01.277961 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-01 03:01:01.319349 | orchestrator | skipping: Conditional result was False
2025-06-01 03:01:01.394548 |
2025-06-01 03:01:01.394694 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-01 03:01:01.859399 | orchestrator | ok
2025-06-01 03:01:01.883203 |
2025-06-01 03:01:01.883348 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-01 03:01:01.941409 | orchestrator | ok
2025-06-01 03:01:01.957183 |
2025-06-01 03:01:01.957319 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-01 03:01:02.354534 | orchestrator -> localhost | ok
2025-06-01 03:01:02.363168 |
2025-06-01 03:01:02.363288 | TASK [validate-host : Collect information about the host]
2025-06-01 03:01:03.622878 | orchestrator | ok
2025-06-01 03:01:03.648058 |
2025-06-01 03:01:03.648195 | TASK [validate-host : Sanitize hostname]
2025-06-01 03:01:03.721784 | orchestrator | ok
2025-06-01 03:01:03.728086 |
2025-06-01 03:01:03.728205 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-01 03:01:04.310299 | orchestrator -> localhost | changed
2025-06-01 03:01:04.317254 |
2025-06-01 03:01:04.317385 | TASK [validate-host : Collect information about zuul worker]
2025-06-01 03:01:04.754321 | orchestrator | ok
2025-06-01 03:01:04.765988 |
2025-06-01 03:01:04.766256 | TASK [validate-host : Write out all zuul information for each host]
2025-06-01 03:01:05.354189 | orchestrator -> localhost | changed
2025-06-01 03:01:05.374099 |
2025-06-01 03:01:05.374237 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-01 03:01:05.636195 | orchestrator | ok
2025-06-01 03:01:05.642413 |
2025-06-01 03:01:05.642561 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-01 03:01:23.544108 | orchestrator | changed:
2025-06-01 03:01:23.544316 | orchestrator | .d..t...... src/
2025-06-01 03:01:23.544352 | orchestrator | .d..t...... src/github.com/
2025-06-01 03:01:23.544378 | orchestrator | .d..t...... src/github.com/osism/
2025-06-01 03:01:23.544400 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-01 03:01:23.544421 | orchestrator | RedHat.yml
2025-06-01 03:01:23.575178 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-01 03:01:23.575197 | orchestrator | RedHat.yml
2025-06-01 03:01:23.575252 | orchestrator | = 1.53.0"...
2025-06-01 03:01:41.557332 | orchestrator | 03:01:41.557 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-01 03:01:43.140627 | orchestrator | 03:01:43.140 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-01 03:01:44.153166 | orchestrator | 03:01:44.152 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-01 03:01:45.599682 | orchestrator | 03:01:45.599 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-01 03:01:46.703724 | orchestrator | 03:01:46.703 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-01 03:01:48.186760 | orchestrator | 03:01:48.186 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-01 03:01:49.469630 | orchestrator | 03:01:49.468 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-01 03:01:49.469739 | orchestrator | 03:01:49.469 STDOUT terraform: Providers are signed by their developers.
2025-06-01 03:01:49.469749 | orchestrator | 03:01:49.469 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-01 03:01:49.469757 | orchestrator | 03:01:49.469 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-01 03:01:49.469764 | orchestrator | 03:01:49.469 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-01 03:01:49.469777 | orchestrator | 03:01:49.469 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-01 03:01:49.469787 | orchestrator | 03:01:49.469 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-01 03:01:49.469794 | orchestrator | 03:01:49.469 STDOUT terraform: you run "tofu init" in the future.
2025-06-01 03:01:49.474037 | orchestrator | 03:01:49.473 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-01 03:01:49.474089 | orchestrator | 03:01:49.473 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-01 03:01:49.474099 | orchestrator | 03:01:49.473 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-01 03:01:49.474114 | orchestrator | 03:01:49.473 STDOUT terraform: should now work.
2025-06-01 03:01:49.474122 | orchestrator | 03:01:49.474 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-01 03:01:49.474232 | orchestrator | 03:01:49.474 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-01 03:01:49.474284 | orchestrator | 03:01:49.474 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-01 03:01:49.654146 | orchestrator | 03:01:49.653 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-01 03:01:49.845386 | orchestrator | 03:01:49.845 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-01 03:01:49.845475 | orchestrator | 03:01:49.845 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-01 03:01:49.845654 | orchestrator | 03:01:49.845 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-01 03:01:49.845704 | orchestrator | 03:01:49.845 STDOUT terraform: for this configuration.
2025-06-01 03:01:50.094544 | orchestrator | 03:01:50.094 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-01 03:01:50.213851 | orchestrator | 03:01:50.213 STDOUT terraform: ci.auto.tfvars
2025-06-01 03:01:50.216452 | orchestrator | 03:01:50.216 STDOUT terraform: default_custom.tf
2025-06-01 03:01:50.435302 | orchestrator | 03:01:50.435 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-01 03:01:51.359715 | orchestrator | 03:01:51.359 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-01 03:01:51.901899 | orchestrator | 03:01:51.901 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-01 03:01:52.094542 | orchestrator | 03:01:52.092 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-01 03:01:52.094634 | orchestrator | 03:01:52.092 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-01 03:01:52.094648 | orchestrator | 03:01:52.093 STDOUT terraform:  + create
2025-06-01 03:01:52.094660 | orchestrator | 03:01:52.093 STDOUT terraform:  <= read (data resources)
2025-06-01 03:01:52.094672 | orchestrator | 03:01:52.093 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-01 03:01:52.094684 | orchestrator | 03:01:52.093 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-01 03:01:52.094694 | orchestrator | 03:01:52.093 STDOUT terraform:  # (config refers to values not yet known)
2025-06-01 03:01:52.094704 | orchestrator | 03:01:52.093 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-01 03:01:52.094714 | orchestrator | 03:01:52.093 STDOUT terraform:  + checksum = (known after apply)
2025-06-01 03:01:52.094723 | orchestrator | 03:01:52.093 STDOUT terraform:  + created_at = (known after apply)
2025-06-01 03:01:52.094733 | orchestrator | 03:01:52.093 STDOUT terraform:  + file = (known after apply)
2025-06-01 03:01:52.094743 | orchestrator | 03:01:52.093 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.094753 | orchestrator | 03:01:52.093 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.094763 | orchestrator | 03:01:52.093 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-01 03:01:52.094772 | orchestrator | 03:01:52.093 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-01 03:01:52.094782 | orchestrator | 03:01:52.093 STDOUT terraform:  + most_recent = true
2025-06-01 03:01:52.094812 | orchestrator | 03:01:52.093 STDOUT terraform:  + name = (known after apply)
2025-06-01 03:01:52.094822 | orchestrator | 03:01:52.093 STDOUT terraform:  + protected = (known after apply)
2025-06-01 03:01:52.094832 | orchestrator | 03:01:52.093 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.094841 | orchestrator | 03:01:52.093 STDOUT terraform:  + schema = (known after apply)
2025-06-01 03:01:52.094851 | orchestrator | 03:01:52.094 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-01 03:01:52.094862 | orchestrator | 03:01:52.094 STDOUT terraform:  + tags = (known after apply)
2025-06-01 03:01:52.094872 | orchestrator | 03:01:52.094 STDOUT terraform:  + updated_at = (known after apply)
2025-06-01 03:01:52.094881 | orchestrator | 03:01:52.094 STDOUT terraform:  }
2025-06-01 03:01:52.094891 | orchestrator | 03:01:52.094 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-06-01 03:01:52.094901 | orchestrator | 03:01:52.094 STDOUT terraform:  # (config refers to values not yet known)
2025-06-01 03:01:52.094914 | orchestrator | 03:01:52.094 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-01 03:01:52.094936 | orchestrator | 03:01:52.094 STDOUT terraform:  + checksum = (known after apply)
2025-06-01 03:01:52.094947 | orchestrator | 03:01:52.094 STDOUT terraform:  + created_at = (known after apply)
2025-06-01 03:01:52.094956 | orchestrator | 03:01:52.094 STDOUT terraform:  + file = (known after apply)
2025-06-01 03:01:52.094966 | orchestrator | 03:01:52.094 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.094976 | orchestrator | 03:01:52.094 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.095005 | orchestrator | 03:01:52.094 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-01 03:01:52.095015 | orchestrator | 03:01:52.094 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-01 03:01:52.095025 | orchestrator | 03:01:52.094 STDOUT terraform:  + most_recent = true
2025-06-01 03:01:52.095035 | orchestrator | 03:01:52.094 STDOUT terraform:  + name = (known after apply)
2025-06-01 03:01:52.095048 | orchestrator | 03:01:52.094 STDOUT terraform:  + protected = (known after apply)
2025-06-01 03:01:52.095059 | orchestrator | 03:01:52.094 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.095110 | orchestrator | 03:01:52.095 STDOUT terraform:  + schema = (known after apply)
2025-06-01 03:01:52.095212 | orchestrator | 03:01:52.095 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-01 03:01:52.095230 | orchestrator | 03:01:52.095 STDOUT terraform:  + tags = (known after apply)
2025-06-01 03:01:52.095308 | orchestrator | 03:01:52.095 STDOUT terraform:  + updated_at = (known after apply)
2025-06-01 03:01:52.095321 | orchestrator | 03:01:52.095 STDOUT terraform:  }
2025-06-01 03:01:52.095381 | orchestrator | 03:01:52.095 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-06-01 03:01:52.095420 | orchestrator | 03:01:52.095 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-06-01 03:01:52.095519 | orchestrator | 03:01:52.095 STDOUT terraform:  + content = (known after apply)
2025-06-01 03:01:52.095535 | orchestrator | 03:01:52.095 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 03:01:52.095625 | orchestrator | 03:01:52.095 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 03:01:52.095642 | orchestrator | 03:01:52.095 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 03:01:52.095726 | orchestrator | 03:01:52.095 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 03:01:52.095771 | orchestrator | 03:01:52.095 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 03:01:52.095837 | orchestrator | 03:01:52.095 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 03:01:52.095891 | orchestrator | 03:01:52.095 STDOUT terraform:  + directory_permission = "0777"
2025-06-01 03:01:52.095907 | orchestrator | 03:01:52.095 STDOUT terraform:  + file_permission = "0644"
2025-06-01 03:01:52.095974 | orchestrator | 03:01:52.095 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-06-01 03:01:52.096060 | orchestrator | 03:01:52.095 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.096077 | orchestrator | 03:01:52.096 STDOUT terraform:  }
2025-06-01 03:01:52.096187 | orchestrator | 03:01:52.096 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-06-01 03:01:52.096219 | orchestrator | 03:01:52.096 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-06-01 03:01:52.096320 | orchestrator | 03:01:52.096 STDOUT terraform:  + content = (known after apply)
2025-06-01 03:01:52.096337 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 03:01:52.096402 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 03:01:52.096467 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 03:01:52.096520 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 03:01:52.096585 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 03:01:52.096671 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 03:01:52.096683 | orchestrator | 03:01:52.096 STDOUT terraform:  + directory_permission = "0777"
2025-06-01 03:01:52.096719 | orchestrator | 03:01:52.096 STDOUT terraform:  + file_permission = "0644"
2025-06-01 03:01:52.096771 | orchestrator | 03:01:52.096 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-06-01 03:01:52.096867 | orchestrator | 03:01:52.096 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.096880 | orchestrator | 03:01:52.096 STDOUT terraform:  }
2025-06-01 03:01:52.096893 | orchestrator | 03:01:52.096 STDOUT terraform:  # local_file.inventory will be created
2025-06-01 03:01:52.096965 | orchestrator | 03:01:52.096 STDOUT terraform:  + resource "local_file" "inventory" {
2025-06-01 03:01:52.096981 | orchestrator | 03:01:52.096 STDOUT terraform:  + content = (known after apply)
2025-06-01 03:01:52.097068 | orchestrator | 03:01:52.096 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 03:01:52.097109 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 03:01:52.097203 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 03:01:52.097246 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 03:01:52.097301 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 03:01:52.097387 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 03:01:52.097398 | orchestrator | 03:01:52.097 STDOUT terraform:  + directory_permission = "0777"
2025-06-01 03:01:52.097482 | orchestrator | 03:01:52.097 STDOUT terraform:  + file_permission = "0644"
2025-06-01 03:01:52.097504 | orchestrator | 03:01:52.097 STDOUT terraform:  + filename = "inventory.ci"
2025-06-01 03:01:52.097576 | orchestrator | 03:01:52.097 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.097586 | orchestrator | 03:01:52.097 STDOUT terraform:  }
2025-06-01 03:01:52.097629 | orchestrator | 03:01:52.097 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-06-01 03:01:52.097685 | orchestrator | 03:01:52.097 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-06-01 03:01:52.097766 | orchestrator | 03:01:52.097 STDOUT terraform:  + content = (sensitive value)
2025-06-01 03:01:52.097863 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 03:01:52.097899 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 03:01:52.097962 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 03:01:52.098067 | orchestrator | 03:01:52.097 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 03:01:52.098153 | orchestrator | 03:01:52.098 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 03:01:52.098191 | orchestrator | 03:01:52.098 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 03:01:52.098250 | orchestrator | 03:01:52.098 STDOUT terraform:  + directory_permission = "0700"
2025-06-01 03:01:52.098279 | orchestrator | 03:01:52.098 STDOUT terraform:  + file_permission = "0600"
2025-06-01 03:01:52.098341 | orchestrator | 03:01:52.098 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-06-01 03:01:52.098392 | orchestrator | 03:01:52.098 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.098405 | orchestrator | 03:01:52.098 STDOUT terraform:  }
2025-06-01 03:01:52.098467 | orchestrator | 03:01:52.098 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-06-01 03:01:52.098528 | orchestrator | 03:01:52.098 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-06-01 03:01:52.098586 | orchestrator | 03:01:52.098 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.098615 | orchestrator | 03:01:52.098 STDOUT terraform:  }
2025-06-01 03:01:52.098715 | orchestrator | 03:01:52.098 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-01 03:01:52.098785 | orchestrator | 03:01:52.098 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-01 03:01:52.098847 | orchestrator | 03:01:52.098 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.098891 | orchestrator | 03:01:52.098 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.098952 | orchestrator | 03:01:52.098 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.099186 | orchestrator | 03:01:52.098 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.099280 | orchestrator | 03:01:52.099 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.099309 | orchestrator | 03:01:52.099 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-06-01 03:01:52.099343 | orchestrator | 03:01:52.099 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.099355 | orchestrator | 03:01:52.099 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.099367 | orchestrator | 03:01:52.099 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.099378 | orchestrator | 03:01:52.099 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.099394 | orchestrator | 03:01:52.099 STDOUT terraform:  }
2025-06-01 03:01:52.099409 | orchestrator | 03:01:52.099 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-01 03:01:52.099492 | orchestrator | 03:01:52.099 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 03:01:52.099548 | orchestrator | 03:01:52.099 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.099589 | orchestrator | 03:01:52.099 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.099651 | orchestrator | 03:01:52.099 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.099714 | orchestrator | 03:01:52.099 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.099773 | orchestrator | 03:01:52.099 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.099850 | orchestrator | 03:01:52.099 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-06-01 03:01:52.099910 | orchestrator | 03:01:52.099 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.099939 | orchestrator | 03:01:52.099 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.100024 | orchestrator | 03:01:52.099 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.100056 | orchestrator | 03:01:52.099 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.100072 | orchestrator | 03:01:52.100 STDOUT terraform:  }
2025-06-01 03:01:52.100192 | orchestrator | 03:01:52.100 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-01 03:01:52.100239 | orchestrator | 03:01:52.100 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 03:01:52.100307 | orchestrator | 03:01:52.100 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.100324 | orchestrator | 03:01:52.100 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.100400 | orchestrator | 03:01:52.100 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.100458 | orchestrator | 03:01:52.100 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.100515 | orchestrator | 03:01:52.100 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.100587 | orchestrator | 03:01:52.100 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-06-01 03:01:52.100645 | orchestrator | 03:01:52.100 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.100662 | orchestrator | 03:01:52.100 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.100711 | orchestrator | 03:01:52.100 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.100749 | orchestrator | 03:01:52.100 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.100765 | orchestrator | 03:01:52.100 STDOUT terraform:  }
2025-06-01 03:01:52.100842 | orchestrator | 03:01:52.100 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-01 03:01:52.100914 | orchestrator | 03:01:52.100 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 03:01:52.100971 | orchestrator | 03:01:52.100 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.101012 | orchestrator | 03:01:52.100 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.101080 | orchestrator | 03:01:52.101 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.101137 | orchestrator | 03:01:52.101 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.101194 | orchestrator | 03:01:52.101 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.101265 | orchestrator | 03:01:52.101 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-06-01 03:01:52.101321 | orchestrator | 03:01:52.101 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.101359 | orchestrator | 03:01:52.101 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.101397 | orchestrator | 03:01:52.101 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.101435 | orchestrator | 03:01:52.101 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.101451 | orchestrator | 03:01:52.101 STDOUT terraform:  }
2025-06-01 03:01:52.101555 | orchestrator | 03:01:52.101 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-01 03:01:52.101597 | orchestrator | 03:01:52.101 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 03:01:52.101655 | orchestrator | 03:01:52.101 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.101673 | orchestrator | 03:01:52.101 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.101773 | orchestrator | 03:01:52.101 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.101791 | orchestrator | 03:01:52.101 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.101848 | orchestrator | 03:01:52.101 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.101921 | orchestrator | 03:01:52.101 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-06-01 03:01:52.101978 | orchestrator | 03:01:52.101 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.102129 | orchestrator | 03:01:52.101 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.102181 | orchestrator | 03:01:52.102 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.102221 | orchestrator | 03:01:52.102 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.102237 | orchestrator | 03:01:52.102 STDOUT terraform:  }
2025-06-01 03:01:52.102307 | orchestrator | 03:01:52.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-01 03:01:52.102375 | orchestrator | 03:01:52.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 03:01:52.102425 | orchestrator | 03:01:52.102 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.102462 | orchestrator | 03:01:52.102 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.102521 | orchestrator | 03:01:52.102 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.102577 | orchestrator | 03:01:52.102 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.102630 | orchestrator | 03:01:52.102 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.102745 | orchestrator | 03:01:52.102 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-06-01 03:01:52.102799 | orchestrator | 03:01:52.102 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.102816 | orchestrator | 03:01:52.102 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.102869 | orchestrator | 03:01:52.102 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.102887 | orchestrator | 03:01:52.102 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.102901 | orchestrator | 03:01:52.102 STDOUT terraform:  }
2025-06-01 03:01:52.102973 | orchestrator | 03:01:52.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-01 03:01:52.103055 | orchestrator | 03:01:52.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 03:01:52.103105 | orchestrator | 03:01:52.103 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.103132 | orchestrator | 03:01:52.103 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 03:01:52.103189 | orchestrator | 03:01:52.103 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.103240 | orchestrator | 03:01:52.103 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 03:01:52.103291 | orchestrator | 03:01:52.103 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 03:01:52.103354 | orchestrator | 03:01:52.103 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-06-01 03:01:52.103406 | orchestrator | 03:01:52.103 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.103457 | orchestrator | 03:01:52.103 STDOUT terraform:  + size = 80
2025-06-01 03:01:52.103495 | orchestrator | 03:01:52.103 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 03:01:52.103532 | orchestrator | 03:01:52.103 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 03:01:52.103548 | orchestrator | 03:01:52.103 STDOUT terraform:  }
2025-06-01 03:01:52.103607 | orchestrator | 03:01:52.103 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-01 03:01:52.103668 | orchestrator | 03:01:52.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 03:01:52.103718 | orchestrator | 03:01:52.103 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 03:01:52.103756 | orchestrator | 03:01:52.103 STDOUT terraform:  +
availability_zone = "nova" 2025-06-01 03:01:52.103803 | orchestrator | 03:01:52.103 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.103857 | orchestrator | 03:01:52.103 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.103911 | orchestrator | 03:01:52.103 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-01 03:01:52.103964 | orchestrator | 03:01:52.103 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.103980 | orchestrator | 03:01:52.103 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.104032 | orchestrator | 03:01:52.103 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.104047 | orchestrator | 03:01:52.104 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.104061 | orchestrator | 03:01:52.104 STDOUT terraform:  } 2025-06-01 03:01:52.104259 | orchestrator | 03:01:52.104 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-01 03:01:52.104299 | orchestrator | 03:01:52.104 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.104314 | orchestrator | 03:01:52.104 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.104321 | orchestrator | 03:01:52.104 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.104356 | orchestrator | 03:01:52.104 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.104401 | orchestrator | 03:01:52.104 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.104455 | orchestrator | 03:01:52.104 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-01 03:01:52.104507 | orchestrator | 03:01:52.104 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.104532 | orchestrator | 03:01:52.104 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.104568 | orchestrator | 03:01:52.104 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.104607 | orchestrator | 
03:01:52.104 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.104620 | orchestrator | 03:01:52.104 STDOUT terraform:  } 2025-06-01 03:01:52.104684 | orchestrator | 03:01:52.104 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-01 03:01:52.104744 | orchestrator | 03:01:52.104 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.104794 | orchestrator | 03:01:52.104 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.104824 | orchestrator | 03:01:52.104 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.104877 | orchestrator | 03:01:52.104 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.104928 | orchestrator | 03:01:52.104 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.104982 | orchestrator | 03:01:52.104 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-01 03:01:52.105048 | orchestrator | 03:01:52.104 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.105086 | orchestrator | 03:01:52.105 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.105103 | orchestrator | 03:01:52.105 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.105143 | orchestrator | 03:01:52.105 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.105155 | orchestrator | 03:01:52.105 STDOUT terraform:  } 2025-06-01 03:01:52.105221 | orchestrator | 03:01:52.105 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-01 03:01:52.105287 | orchestrator | 03:01:52.105 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.105338 | orchestrator | 03:01:52.105 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.105367 | orchestrator | 03:01:52.105 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.105419 | orchestrator | 03:01:52.105 STDOUT 
terraform:  + id = (known after apply) 2025-06-01 03:01:52.105470 | orchestrator | 03:01:52.105 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.105526 | orchestrator | 03:01:52.105 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-01 03:01:52.105577 | orchestrator | 03:01:52.105 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.105606 | orchestrator | 03:01:52.105 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.105640 | orchestrator | 03:01:52.105 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.105652 | orchestrator | 03:01:52.105 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.105689 | orchestrator | 03:01:52.105 STDOUT terraform:  } 2025-06-01 03:01:52.105744 | orchestrator | 03:01:52.105 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-01 03:01:52.105805 | orchestrator | 03:01:52.105 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.105858 | orchestrator | 03:01:52.105 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.105888 | orchestrator | 03:01:52.105 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.105941 | orchestrator | 03:01:52.105 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.106004 | orchestrator | 03:01:52.105 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.106091 | orchestrator | 03:01:52.105 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-01 03:01:52.106144 | orchestrator | 03:01:52.106 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.106174 | orchestrator | 03:01:52.106 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.106208 | orchestrator | 03:01:52.106 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.106245 | orchestrator | 03:01:52.106 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.106264 | 
orchestrator | 03:01:52.106 STDOUT terraform:  } 2025-06-01 03:01:52.106323 | orchestrator | 03:01:52.106 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-01 03:01:52.106384 | orchestrator | 03:01:52.106 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.106436 | orchestrator | 03:01:52.106 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.106470 | orchestrator | 03:01:52.106 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.106528 | orchestrator | 03:01:52.106 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.106602 | orchestrator | 03:01:52.106 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.106657 | orchestrator | 03:01:52.106 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-01 03:01:52.106708 | orchestrator | 03:01:52.106 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.106739 | orchestrator | 03:01:52.106 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.106774 | orchestrator | 03:01:52.106 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.106811 | orchestrator | 03:01:52.106 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.106823 | orchestrator | 03:01:52.106 STDOUT terraform:  } 2025-06-01 03:01:52.106888 | orchestrator | 03:01:52.106 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-01 03:01:52.106950 | orchestrator | 03:01:52.106 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.107123 | orchestrator | 03:01:52.106 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.107166 | orchestrator | 03:01:52.107 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.107186 | orchestrator | 03:01:52.107 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.107199 | orchestrator | 
03:01:52.107 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.107214 | orchestrator | 03:01:52.107 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-01 03:01:52.107261 | orchestrator | 03:01:52.107 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.107277 | orchestrator | 03:01:52.107 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.107328 | orchestrator | 03:01:52.107 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.107344 | orchestrator | 03:01:52.107 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.107357 | orchestrator | 03:01:52.107 STDOUT terraform:  } 2025-06-01 03:01:52.107425 | orchestrator | 03:01:52.107 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-01 03:01:52.107487 | orchestrator | 03:01:52.107 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 03:01:52.107538 | orchestrator | 03:01:52.107 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 03:01:52.107573 | orchestrator | 03:01:52.107 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.107625 | orchestrator | 03:01:52.107 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.107677 | orchestrator | 03:01:52.107 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 03:01:52.107731 | orchestrator | 03:01:52.107 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-01 03:01:52.107782 | orchestrator | 03:01:52.107 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.107799 | orchestrator | 03:01:52.107 STDOUT terraform:  + size = 20 2025-06-01 03:01:52.107840 | orchestrator | 03:01:52.107 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 03:01:52.107866 | orchestrator | 03:01:52.107 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 03:01:52.107881 | orchestrator | 03:01:52.107 STDOUT terraform:  } 2025-06-01 03:01:52.107946 | orchestrator | 
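The repeated volume blocks in this plan are characteristic of Terraform's `count` meta-argument. A minimal, hypothetical sketch of the kind of configuration that would produce them (inferred from the plan output only; names, variables, and counts are illustrative, not the actual osism/testbed source):

```hcl
# Hypothetical sketch inferred from the plan output above;
# not the actual osism/testbed Terraform configuration.
variable "number_of_nodes" {
  default = 6
}

# One 80 GB base volume per node: "testbed-volume-<n>-node-base".
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = var.number_of_nodes
  name              = "testbed-volume-${count.index}-node-base"
  availability_zone = "nova"
  size              = 80
  volume_type       = "ssd"
  # image_id would be resolved from an image data source in the
  # real configuration (it shows as "(known after apply)" in the plan).
}

# 20 GB data volumes, distributed round-robin over nodes 3-5,
# matching the "testbed-volume-<n>-node-<3 + n % 3>" names in the plan.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
  availability_zone = "nova"
  size              = 20
  volume_type       = "ssd"
}
```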
  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
03:01:52.119 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 03:01:52.119269 | orchestrator | 03:01:52.119 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.119298 | orchestrator | 03:01:52.119 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.119311 | orchestrator | 03:01:52.119 STDOUT terraform:  + config_drive = true 2025-06-01 03:01:52.119360 | orchestrator | 03:01:52.119 STDOUT terraform:  + created = (known after apply) 2025-06-01 03:01:52.119402 | orchestrator | 03:01:52.119 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 03:01:52.119437 | orchestrator | 03:01:52.119 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 03:01:52.119457 | orchestrator | 03:01:52.119 STDOUT terraform:  + force_delete = false 2025-06-01 03:01:52.119550 | orchestrator | 03:01:52.119 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 03:01:52.119632 | orchestrator | 03:01:52.119 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.119685 | orchestrator | 03:01:52.119 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 03:01:52.119729 | orchestrator | 03:01:52.119 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 03:01:52.119759 | orchestrator | 03:01:52.119 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 03:01:52.119788 | orchestrator | 03:01:52.119 STDOUT terraform:  + name = "testbed-node-4" 2025-06-01 03:01:52.119817 | orchestrator | 03:01:52.119 STDOUT terraform:  + power_state = "active" 2025-06-01 03:01:52.119858 | orchestrator | 03:01:52.119 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.119898 | orchestrator | 03:01:52.119 STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 03:01:52.119911 | orchestrator | 03:01:52.119 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 03:01:52.119962 | orchestrator | 03:01:52.119 STDOUT terraform:  + updated = (known after apply) 2025-06-01 
03:01:52.120033 | orchestrator | 03:01:52.119 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 03:01:52.120046 | orchestrator | 03:01:52.120 STDOUT terraform:  + block_device { 2025-06-01 03:01:52.120075 | orchestrator | 03:01:52.120 STDOUT terraform:  + boot_index = 0 2025-06-01 03:01:52.120112 | orchestrator | 03:01:52.120 STDOUT terraform:  + delete_on_termination = false 2025-06-01 03:01:52.120124 | orchestrator | 03:01:52.120 STDOUT terraform:  + destination_type = "volume" 2025-06-01 03:01:52.120172 | orchestrator | 03:01:52.120 STDOUT terraform:  + multiattach = false 2025-06-01 03:01:52.120185 | orchestrator | 03:01:52.120 STDOUT terraform:  + source_type = "volume" 2025-06-01 03:01:52.120240 | orchestrator | 03:01:52.120 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 03:01:52.120253 | orchestrator | 03:01:52.120 STDOUT terraform:  } 2025-06-01 03:01:52.120262 | orchestrator | 03:01:52.120 STDOUT terraform:  + network { 2025-06-01 03:01:52.120272 | orchestrator | 03:01:52.120 STDOUT terraform:  + access_network = false 2025-06-01 03:01:52.120319 | orchestrator | 03:01:52.120 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 03:01:52.120349 | orchestrator | 03:01:52.120 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 03:01:52.120388 | orchestrator | 03:01:52.120 STDOUT terraform:  + mac = (known after apply) 2025-06-01 03:01:52.120424 | orchestrator | 03:01:52.120 STDOUT terraform:  + name = (known after apply) 2025-06-01 03:01:52.120462 | orchestrator | 03:01:52.120 STDOUT terraform:  + port = (known after apply) 2025-06-01 03:01:52.120498 | orchestrator | 03:01:52.120 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 03:01:52.120517 | orchestrator | 03:01:52.120 STDOUT terraform:  } 2025-06-01 03:01:52.120525 | orchestrator | 03:01:52.120 STDOUT terraform:  } 2025-06-01 03:01:52.120573 | orchestrator | 03:01:52.120 STDOUT terraform:  # 
openstack_compute_instance_v2.node_server[5] will be created 2025-06-01 03:01:52.120621 | orchestrator | 03:01:52.120 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 03:01:52.120660 | orchestrator | 03:01:52.120 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 03:01:52.120701 | orchestrator | 03:01:52.120 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 03:01:52.120741 | orchestrator | 03:01:52.120 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 03:01:52.120782 | orchestrator | 03:01:52.120 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.120794 | orchestrator | 03:01:52.120 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 03:01:52.120822 | orchestrator | 03:01:52.120 STDOUT terraform:  + config_drive = true 2025-06-01 03:01:52.120866 | orchestrator | 03:01:52.120 STDOUT terraform:  + created = (known after apply) 2025-06-01 03:01:52.120908 | orchestrator | 03:01:52.120 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 03:01:52.120937 | orchestrator | 03:01:52.120 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 03:01:52.120974 | orchestrator | 03:01:52.120 STDOUT terraform:  + force_delete = false 2025-06-01 03:01:52.121038 | orchestrator | 03:01:52.120 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 03:01:52.121080 | orchestrator | 03:01:52.121 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.121126 | orchestrator | 03:01:52.121 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 03:01:52.121163 | orchestrator | 03:01:52.121 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 03:01:52.121200 | orchestrator | 03:01:52.121 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 03:01:52.121212 | orchestrator | 03:01:52.121 STDOUT terraform:  + name = "testbed-node-5" 2025-06-01 03:01:52.121248 | orchestrator | 03:01:52.121 STDOUT terraform:  + 
power_state = "active" 2025-06-01 03:01:52.121286 | orchestrator | 03:01:52.121 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.121322 | orchestrator | 03:01:52.121 STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 03:01:52.121334 | orchestrator | 03:01:52.121 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 03:01:52.121384 | orchestrator | 03:01:52.121 STDOUT terraform:  + updated = (known after apply) 2025-06-01 03:01:52.121462 | orchestrator | 03:01:52.121 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 03:01:52.121480 | orchestrator | 03:01:52.121 STDOUT terraform:  + block_device { 2025-06-01 03:01:52.121492 | orchestrator | 03:01:52.121 STDOUT terraform:  + boot_index = 0 2025-06-01 03:01:52.121529 | orchestrator | 03:01:52.121 STDOUT terraform:  + delete_on_termination = false 2025-06-01 03:01:52.121541 | orchestrator | 03:01:52.121 STDOUT terraform:  + destination_type = "volume" 2025-06-01 03:01:52.121576 | orchestrator | 03:01:52.121 STDOUT terraform:  + multiattach = false 2025-06-01 03:01:52.121613 | orchestrator | 03:01:52.121 STDOUT terraform:  + source_type = "volume" 2025-06-01 03:01:52.121642 | orchestrator | 03:01:52.121 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 03:01:52.121654 | orchestrator | 03:01:52.121 STDOUT terraform:  } 2025-06-01 03:01:52.121664 | orchestrator | 03:01:52.121 STDOUT terraform:  + network { 2025-06-01 03:01:52.121675 | orchestrator | 03:01:52.121 STDOUT terraform:  + access_network = false 2025-06-01 03:01:52.121720 | orchestrator | 03:01:52.121 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 03:01:52.121749 | orchestrator | 03:01:52.121 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 03:01:52.121778 | orchestrator | 03:01:52.121 STDOUT terraform:  + mac = (known after apply) 2025-06-01 03:01:52.121815 | orchestrator | 03:01:52.121 STDOUT terraform:  + name = (known after apply) 
2025-06-01 03:01:52.121844 | orchestrator | 03:01:52.121 STDOUT terraform:  + port = (known after apply) 2025-06-01 03:01:52.121872 | orchestrator | 03:01:52.121 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 03:01:52.121884 | orchestrator | 03:01:52.121 STDOUT terraform:  } 2025-06-01 03:01:52.121892 | orchestrator | 03:01:52.121 STDOUT terraform:  } 2025-06-01 03:01:52.121931 | orchestrator | 03:01:52.121 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-06-01 03:01:52.121961 | orchestrator | 03:01:52.121 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-06-01 03:01:52.122001 | orchestrator | 03:01:52.121 STDOUT terraform:  + fingerprint = (known after apply) 2025-06-01 03:01:52.122044 | orchestrator | 03:01:52.121 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.122057 | orchestrator | 03:01:52.122 STDOUT terraform:  + name = "testbed" 2025-06-01 03:01:52.122085 | orchestrator | 03:01:52.122 STDOUT terraform:  + private_key = (sensitive value) 2025-06-01 03:01:52.122113 | orchestrator | 03:01:52.122 STDOUT terraform:  + public_key = (known after apply) 2025-06-01 03:01:52.122150 | orchestrator | 03:01:52.122 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.122162 | orchestrator | 03:01:52.122 STDOUT terraform:  + user_id = (known after apply) 2025-06-01 03:01:52.122172 | orchestrator | 03:01:52.122 STDOUT terraform:  } 2025-06-01 03:01:52.122233 | orchestrator | 03:01:52.122 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-06-01 03:01:52.122284 | orchestrator | 03:01:52.122 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.122313 | orchestrator | 03:01:52.122 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.122350 | orchestrator | 03:01:52.122 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.122362 | 
orchestrator | 03:01:52.122 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.122399 | orchestrator | 03:01:52.122 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.122437 | orchestrator | 03:01:52.122 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.122456 | orchestrator | 03:01:52.122 STDOUT terraform:  } 2025-06-01 03:01:52.122493 | orchestrator | 03:01:52.122 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-06-01 03:01:52.122549 | orchestrator | 03:01:52.122 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.122562 | orchestrator | 03:01:52.122 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.122601 | orchestrator | 03:01:52.122 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.122638 | orchestrator | 03:01:52.122 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.122651 | orchestrator | 03:01:52.122 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.122690 | orchestrator | 03:01:52.122 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.122700 | orchestrator | 03:01:52.122 STDOUT terraform:  } 2025-06-01 03:01:52.122751 | orchestrator | 03:01:52.122 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-06-01 03:01:52.122802 | orchestrator | 03:01:52.122 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.122831 | orchestrator | 03:01:52.122 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.122867 | orchestrator | 03:01:52.122 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.122879 | orchestrator | 03:01:52.122 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.122915 | orchestrator | 03:01:52.122 STDOUT 
terraform:  + region = (known after apply) 2025-06-01 03:01:52.122953 | orchestrator | 03:01:52.122 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.122962 | orchestrator | 03:01:52.122 STDOUT terraform:  } 2025-06-01 03:01:52.123052 | orchestrator | 03:01:52.122 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-01 03:01:52.123095 | orchestrator | 03:01:52.123 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.123132 | orchestrator | 03:01:52.123 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.123145 | orchestrator | 03:01:52.123 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.123180 | orchestrator | 03:01:52.123 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.123209 | orchestrator | 03:01:52.123 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.123221 | orchestrator | 03:01:52.123 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.123232 | orchestrator | 03:01:52.123 STDOUT terraform:  } 2025-06-01 03:01:52.123347 | orchestrator | 03:01:52.123 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-01 03:01:52.123401 | orchestrator | 03:01:52.123 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.123438 | orchestrator | 03:01:52.123 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.123450 | orchestrator | 03:01:52.123 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.123485 | orchestrator | 03:01:52.123 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.123514 | orchestrator | 03:01:52.123 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.123530 | orchestrator | 03:01:52.123 STDOUT terraform:  + volume_id = (known after apply) 
2025-06-01 03:01:52.123540 | orchestrator | 03:01:52.123 STDOUT terraform:  } 2025-06-01 03:01:52.123596 | orchestrator | 03:01:52.123 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-01 03:01:52.123643 | orchestrator | 03:01:52.123 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.123668 | orchestrator | 03:01:52.123 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.123699 | orchestrator | 03:01:52.123 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.123724 | orchestrator | 03:01:52.123 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.123749 | orchestrator | 03:01:52.123 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.123774 | orchestrator | 03:01:52.123 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.123782 | orchestrator | 03:01:52.123 STDOUT terraform:  } 2025-06-01 03:01:52.123833 | orchestrator | 03:01:52.123 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-01 03:01:52.123882 | orchestrator | 03:01:52.123 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.123907 | orchestrator | 03:01:52.123 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.123932 | orchestrator | 03:01:52.123 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.123957 | orchestrator | 03:01:52.123 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.123982 | orchestrator | 03:01:52.123 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.124008 | orchestrator | 03:01:52.123 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.124019 | orchestrator | 03:01:52.124 STDOUT terraform:  } 2025-06-01 03:01:52.124080 | orchestrator | 03:01:52.124 STDOUT terraform:  
# openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-01 03:01:52.124129 | orchestrator | 03:01:52.124 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.124154 | orchestrator | 03:01:52.124 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.124179 | orchestrator | 03:01:52.124 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.124204 | orchestrator | 03:01:52.124 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.124236 | orchestrator | 03:01:52.124 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.124262 | orchestrator | 03:01:52.124 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.124270 | orchestrator | 03:01:52.124 STDOUT terraform:  } 2025-06-01 03:01:52.124319 | orchestrator | 03:01:52.124 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-01 03:01:52.124366 | orchestrator | 03:01:52.124 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 03:01:52.124391 | orchestrator | 03:01:52.124 STDOUT terraform:  + device = (known after apply) 2025-06-01 03:01:52.124416 | orchestrator | 03:01:52.124 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.124441 | orchestrator | 03:01:52.124 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 03:01:52.124466 | orchestrator | 03:01:52.124 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.124491 | orchestrator | 03:01:52.124 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 03:01:52.124499 | orchestrator | 03:01:52.124 STDOUT terraform:  } 2025-06-01 03:01:52.124557 | orchestrator | 03:01:52.124 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-01 03:01:52.124613 | orchestrator | 03:01:52.124 
STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-01 03:01:52.124638 | orchestrator | 03:01:52.124 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-01 03:01:52.124663 | orchestrator | 03:01:52.124 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-01 03:01:52.124688 | orchestrator | 03:01:52.124 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.124713 | orchestrator | 03:01:52.124 STDOUT terraform:  + port_id = (known after apply) 2025-06-01 03:01:52.124737 | orchestrator | 03:01:52.124 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.124745 | orchestrator | 03:01:52.124 STDOUT terraform:  } 2025-06-01 03:01:52.124795 | orchestrator | 03:01:52.124 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-01 03:01:52.124843 | orchestrator | 03:01:52.124 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-01 03:01:52.124853 | orchestrator | 03:01:52.124 STDOUT terraform:  + address = (known after apply) 2025-06-01 03:01:52.124890 | orchestrator | 03:01:52.124 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.124923 | orchestrator | 03:01:52.124 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-01 03:01:52.124933 | orchestrator | 03:01:52.124 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.124956 | orchestrator | 03:01:52.124 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-01 03:01:52.124981 | orchestrator | 03:01:52.124 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.125002 | orchestrator | 03:01:52.124 STDOUT terraform:  + pool = "public" 2025-06-01 03:01:52.125035 | orchestrator | 03:01:52.124 STDOUT terraform:  + port_id = (known after apply) 2025-06-01 03:01:52.125046 | orchestrator | 03:01:52.125 STDOUT terraform:  + region = (known after apply) 2025-06-01 
03:01:52.125086 | orchestrator | 03:01:52.125 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.125097 | orchestrator | 03:01:52.125 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.125106 | orchestrator | 03:01:52.125 STDOUT terraform:  } 2025-06-01 03:01:52.125155 | orchestrator | 03:01:52.125 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-01 03:01:52.125199 | orchestrator | 03:01:52.125 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-01 03:01:52.125235 | orchestrator | 03:01:52.125 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.125272 | orchestrator | 03:01:52.125 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.125282 | orchestrator | 03:01:52.125 STDOUT terraform:  + availability_zone_hints = [ 2025-06-01 03:01:52.125314 | orchestrator | 03:01:52.125 STDOUT terraform:  + "nova", 2025-06-01 03:01:52.125322 | orchestrator | 03:01:52.125 STDOUT terraform:  ] 2025-06-01 03:01:52.125355 | orchestrator | 03:01:52.125 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-01 03:01:52.125416 | orchestrator | 03:01:52.125 STDOUT terraform:  + external = (known after apply) 2025-06-01 03:01:52.125456 | orchestrator | 03:01:52.125 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.125494 | orchestrator | 03:01:52.125 STDOUT terraform:  + mtu = (known after apply) 2025-06-01 03:01:52.125533 | orchestrator | 03:01:52.125 STDOUT terraform:  + name = "net-testbed-management" 2025-06-01 03:01:52.125570 | orchestrator | 03:01:52.125 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.125607 | orchestrator | 03:01:52.125 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.125644 | orchestrator | 03:01:52.125 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.125681 | orchestrator | 03:01:52.125 
STDOUT terraform:  + shared = (known after apply) 2025-06-01 03:01:52.125718 | orchestrator | 03:01:52.125 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.125755 | orchestrator | 03:01:52.125 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-01 03:01:52.125769 | orchestrator | 03:01:52.125 STDOUT terraform:  + segments (known after apply) 2025-06-01 03:01:52.125778 | orchestrator | 03:01:52.125 STDOUT terraform:  } 2025-06-01 03:01:52.125831 | orchestrator | 03:01:52.125 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-01 03:01:52.125876 | orchestrator | 03:01:52.125 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-01 03:01:52.125913 | orchestrator | 03:01:52.125 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.125949 | orchestrator | 03:01:52.125 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.125999 | orchestrator | 03:01:52.125 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 03:01:52.126127 | orchestrator | 03:01:52.125 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.126175 | orchestrator | 03:01:52.126 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.126194 | orchestrator | 03:01:52.126 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 03:01:52.126205 | orchestrator | 03:01:52.126 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.126215 | orchestrator | 03:01:52.126 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.126240 | orchestrator | 03:01:52.126 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.126251 | orchestrator | 03:01:52.126 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.126278 | orchestrator | 03:01:52.126 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 
03:01:52.126318 | orchestrator | 03:01:52.126 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.126354 | orchestrator | 03:01:52.126 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.126392 | orchestrator | 03:01:52.126 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.126430 | orchestrator | 03:01:52.126 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.126468 | orchestrator | 03:01:52.126 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.126483 | orchestrator | 03:01:52.126 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.126514 | orchestrator | 03:01:52.126 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 03:01:52.126529 | orchestrator | 03:01:52.126 STDOUT terraform:  } 2025-06-01 03:01:52.126542 | orchestrator | 03:01:52.126 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.126574 | orchestrator | 03:01:52.126 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.126585 | orchestrator | 03:01:52.126 STDOUT terraform:  } 2025-06-01 03:01:52.126599 | orchestrator | 03:01:52.126 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.126611 | orchestrator | 03:01:52.126 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.126624 | orchestrator | 03:01:52.126 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-06-01 03:01:52.126663 | orchestrator | 03:01:52.126 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.126678 | orchestrator | 03:01:52.126 STDOUT terraform:  } 2025-06-01 03:01:52.126688 | orchestrator | 03:01:52.126 STDOUT terraform:  } 2025-06-01 03:01:52.126729 | orchestrator | 03:01:52.126 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-06-01 03:01:52.126775 | orchestrator | 03:01:52.126 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 
2025-06-01 03:01:52.126812 | orchestrator | 03:01:52.126 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.126849 | orchestrator | 03:01:52.126 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.126884 | orchestrator | 03:01:52.126 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 03:01:52.126921 | orchestrator | 03:01:52.126 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.126958 | orchestrator | 03:01:52.126 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.127018 | orchestrator | 03:01:52.126 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 03:01:52.127064 | orchestrator | 03:01:52.126 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.127106 | orchestrator | 03:01:52.127 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.127139 | orchestrator | 03:01:52.127 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.127176 | orchestrator | 03:01:52.127 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.127213 | orchestrator | 03:01:52.127 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 03:01:52.127250 | orchestrator | 03:01:52.127 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.127288 | orchestrator | 03:01:52.127 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.127324 | orchestrator | 03:01:52.127 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.127361 | orchestrator | 03:01:52.127 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.127398 | orchestrator | 03:01:52.127 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.127413 | orchestrator | 03:01:52.127 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.127443 | orchestrator | 03:01:52.127 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-06-01 03:01:52.127458 | orchestrator | 03:01:52.127 STDOUT terraform:  } 2025-06-01 03:01:52.127471 | orchestrator | 03:01:52.127 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.127503 | orchestrator | 03:01:52.127 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 03:01:52.127514 | orchestrator | 03:01:52.127 STDOUT terraform:  } 2025-06-01 03:01:52.127527 | orchestrator | 03:01:52.127 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.127559 | orchestrator | 03:01:52.127 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.127570 | orchestrator | 03:01:52.127 STDOUT terraform:  } 2025-06-01 03:01:52.127583 | orchestrator | 03:01:52.127 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.127615 | orchestrator | 03:01:52.127 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 03:01:52.127627 | orchestrator | 03:01:52.127 STDOUT terraform:  } 2025-06-01 03:01:52.127640 | orchestrator | 03:01:52.127 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.127653 | orchestrator | 03:01:52.127 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.127666 | orchestrator | 03:01:52.127 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-06-01 03:01:52.127704 | orchestrator | 03:01:52.127 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.127719 | orchestrator | 03:01:52.127 STDOUT terraform:  } 2025-06-01 03:01:52.127729 | orchestrator | 03:01:52.127 STDOUT terraform:  } 2025-06-01 03:01:52.127769 | orchestrator | 03:01:52.127 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-06-01 03:01:52.127814 | orchestrator | 03:01:52.127 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 03:01:52.127852 | orchestrator | 03:01:52.127 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.127888 | orchestrator | 03:01:52.127 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.127928 | orchestrator | 03:01:52.127 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 03:01:52.127970 | orchestrator | 03:01:52.127 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.128016 | orchestrator | 03:01:52.127 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.128039 | orchestrator | 03:01:52.127 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 03:01:52.128082 | orchestrator | 03:01:52.128 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.128099 | orchestrator | 03:01:52.128 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.128147 | orchestrator | 03:01:52.128 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.128184 | orchestrator | 03:01:52.128 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.128221 | orchestrator | 03:01:52.128 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 03:01:52.128257 | orchestrator | 03:01:52.128 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.128294 | orchestrator | 03:01:52.128 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.128330 | orchestrator | 03:01:52.128 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.128366 | orchestrator | 03:01:52.128 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.128403 | orchestrator | 03:01:52.128 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.128418 | orchestrator | 03:01:52.128 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.128450 | orchestrator | 03:01:52.128 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 03:01:52.128464 | orchestrator | 03:01:52.128 STDOUT terraform:  } 2025-06-01 03:01:52.128477 | orchestrator | 03:01:52.128 STDOUT terraform:  
+ allowed_address_pairs { 2025-06-01 03:01:52.128501 | orchestrator | 03:01:52.128 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 03:01:52.128515 | orchestrator | 03:01:52.128 STDOUT terraform:  } 2025-06-01 03:01:52.128528 | orchestrator | 03:01:52.128 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.128559 | orchestrator | 03:01:52.128 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.128573 | orchestrator | 03:01:52.128 STDOUT terraform:  } 2025-06-01 03:01:52.128586 | orchestrator | 03:01:52.128 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.128618 | orchestrator | 03:01:52.128 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 03:01:52.128629 | orchestrator | 03:01:52.128 STDOUT terraform:  } 2025-06-01 03:01:52.128642 | orchestrator | 03:01:52.128 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.128655 | orchestrator | 03:01:52.128 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.128669 | orchestrator | 03:01:52.128 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-01 03:01:52.128705 | orchestrator | 03:01:52.128 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.128720 | orchestrator | 03:01:52.128 STDOUT terraform:  } 2025-06-01 03:01:52.128737 | orchestrator | 03:01:52.128 STDOUT terraform:  } 2025-06-01 03:01:52.128772 | orchestrator | 03:01:52.128 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-01 03:01:52.128817 | orchestrator | 03:01:52.128 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 03:01:52.128853 | orchestrator | 03:01:52.128 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.128889 | orchestrator | 03:01:52.128 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.128925 | orchestrator | 03:01:52.128 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-01 03:01:52.128962 | orchestrator | 03:01:52.128 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.129016 | orchestrator | 03:01:52.128 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.129033 | orchestrator | 03:01:52.128 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 03:01:52.129078 | orchestrator | 03:01:52.129 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.129113 | orchestrator | 03:01:52.129 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.129157 | orchestrator | 03:01:52.129 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.129192 | orchestrator | 03:01:52.129 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.129229 | orchestrator | 03:01:52.129 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 03:01:52.129264 | orchestrator | 03:01:52.129 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.129303 | orchestrator | 03:01:52.129 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.129339 | orchestrator | 03:01:52.129 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.129374 | orchestrator | 03:01:52.129 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.129410 | orchestrator | 03:01:52.129 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.129425 | orchestrator | 03:01:52.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.129456 | orchestrator | 03:01:52.129 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 03:01:52.129470 | orchestrator | 03:01:52.129 STDOUT terraform:  } 2025-06-01 03:01:52.129483 | orchestrator | 03:01:52.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.129505 | orchestrator | 03:01:52.129 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 03:01:52.129518 | 
orchestrator | 03:01:52.129 STDOUT terraform:  } 2025-06-01 03:01:52.129531 | orchestrator | 03:01:52.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.129564 | orchestrator | 03:01:52.129 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.129579 | orchestrator | 03:01:52.129 STDOUT terraform:  } 2025-06-01 03:01:52.129631 | orchestrator | 03:01:52.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.129647 | orchestrator | 03:01:52.129 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 03:01:52.129669 | orchestrator | 03:01:52.129 STDOUT terraform:  } 2025-06-01 03:01:52.129679 | orchestrator | 03:01:52.129 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.129692 | orchestrator | 03:01:52.129 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.129702 | orchestrator | 03:01:52.129 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-01 03:01:52.129715 | orchestrator | 03:01:52.129 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.129725 | orchestrator | 03:01:52.129 STDOUT terraform:  } 2025-06-01 03:01:52.129735 | orchestrator | 03:01:52.129 STDOUT terraform:  } 2025-06-01 03:01:52.129770 | orchestrator | 03:01:52.129 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-01 03:01:52.129817 | orchestrator | 03:01:52.129 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 03:01:52.129863 | orchestrator | 03:01:52.129 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.129909 | orchestrator | 03:01:52.129 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.129923 | orchestrator | 03:01:52.129 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 03:01:52.129965 | orchestrator | 03:01:52.129 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.130062 | orchestrator | 
03:01:52.129 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.130081 | orchestrator | 03:01:52.129 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 03:01:52.130092 | orchestrator | 03:01:52.130 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.130141 | orchestrator | 03:01:52.130 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.130156 | orchestrator | 03:01:52.130 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.130194 | orchestrator | 03:01:52.130 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.130243 | orchestrator | 03:01:52.130 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 03:01:52.130258 | orchestrator | 03:01:52.130 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.130296 | orchestrator | 03:01:52.130 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.130335 | orchestrator | 03:01:52.130 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.130349 | orchestrator | 03:01:52.130 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.130402 | orchestrator | 03:01:52.130 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.130414 | orchestrator | 03:01:52.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.130427 | orchestrator | 03:01:52.130 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 03:01:52.130440 | orchestrator | 03:01:52.130 STDOUT terraform:  } 2025-06-01 03:01:52.130453 | orchestrator | 03:01:52.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.130503 | orchestrator | 03:01:52.130 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 03:01:52.130522 | orchestrator | 03:01:52.130 STDOUT terraform:  } 2025-06-01 03:01:52.130535 | orchestrator | 03:01:52.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 
03:01:52.130548 | orchestrator | 03:01:52.130 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.130558 | orchestrator | 03:01:52.130 STDOUT terraform:  } 2025-06-01 03:01:52.130570 | orchestrator | 03:01:52.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.130609 | orchestrator | 03:01:52.130 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 03:01:52.130620 | orchestrator | 03:01:52.130 STDOUT terraform:  } 2025-06-01 03:01:52.130633 | orchestrator | 03:01:52.130 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.130643 | orchestrator | 03:01:52.130 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.130656 | orchestrator | 03:01:52.130 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-01 03:01:52.130694 | orchestrator | 03:01:52.130 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.130705 | orchestrator | 03:01:52.130 STDOUT terraform:  } 2025-06-01 03:01:52.130718 | orchestrator | 03:01:52.130 STDOUT terraform:  } 2025-06-01 03:01:52.130756 | orchestrator | 03:01:52.130 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-01 03:01:52.130796 | orchestrator | 03:01:52.130 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 03:01:52.130846 | orchestrator | 03:01:52.130 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.130860 | orchestrator | 03:01:52.130 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.130898 | orchestrator | 03:01:52.130 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 03:01:52.130918 | orchestrator | 03:01:52.130 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.130967 | orchestrator | 03:01:52.130 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.130982 | orchestrator | 03:01:52.130 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-01 03:01:52.131114 | orchestrator | 03:01:52.130 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.131129 | orchestrator | 03:01:52.131 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.131143 | orchestrator | 03:01:52.131 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.131155 | orchestrator | 03:01:52.131 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.131204 | orchestrator | 03:01:52.131 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 03:01:52.131219 | orchestrator | 03:01:52.131 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.131272 | orchestrator | 03:01:52.131 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.131287 | orchestrator | 03:01:52.131 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.131346 | orchestrator | 03:01:52.131 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.131369 | orchestrator | 03:01:52.131 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.131379 | orchestrator | 03:01:52.131 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.131403 | orchestrator | 03:01:52.131 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 03:01:52.131417 | orchestrator | 03:01:52.131 STDOUT terraform:  } 2025-06-01 03:01:52.131430 | orchestrator | 03:01:52.131 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.131469 | orchestrator | 03:01:52.131 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 03:01:52.131481 | orchestrator | 03:01:52.131 STDOUT terraform:  } 2025-06-01 03:01:52.131494 | orchestrator | 03:01:52.131 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.131506 | orchestrator | 03:01:52.131 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.131519 | orchestrator | 03:01:52.131 STDOUT terraform:  } 
2025-06-01 03:01:52.131532 | orchestrator | 03:01:52.131 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.131580 | orchestrator | 03:01:52.131 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 03:01:52.131590 | orchestrator | 03:01:52.131 STDOUT terraform:  } 2025-06-01 03:01:52.131601 | orchestrator | 03:01:52.131 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.131609 | orchestrator | 03:01:52.131 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.131620 | orchestrator | 03:01:52.131 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-01 03:01:52.131667 | orchestrator | 03:01:52.131 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.131677 | orchestrator | 03:01:52.131 STDOUT terraform:  } 2025-06-01 03:01:52.131687 | orchestrator | 03:01:52.131 STDOUT terraform:  } 2025-06-01 03:01:52.131734 | orchestrator | 03:01:52.131 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-01 03:01:52.131779 | orchestrator | 03:01:52.131 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 03:01:52.131812 | orchestrator | 03:01:52.131 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.131846 | orchestrator | 03:01:52.131 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 03:01:52.131887 | orchestrator | 03:01:52.131 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 03:01:52.131899 | orchestrator | 03:01:52.131 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.131949 | orchestrator | 03:01:52.131 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 03:01:52.131982 | orchestrator | 03:01:52.131 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 03:01:52.132016 | orchestrator | 03:01:52.131 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 03:01:52.132061 | orchestrator | 
03:01:52.132 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 03:01:52.132095 | orchestrator | 03:01:52.132 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.132128 | orchestrator | 03:01:52.132 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 03:01:52.132146 | orchestrator | 03:01:52.132 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 03:01:52.132194 | orchestrator | 03:01:52.132 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 03:01:52.132228 | orchestrator | 03:01:52.132 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 03:01:52.132262 | orchestrator | 03:01:52.132 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.132303 | orchestrator | 03:01:52.132 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 03:01:52.132315 | orchestrator | 03:01:52.132 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.132337 | orchestrator | 03:01:52.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.132370 | orchestrator | 03:01:52.132 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 03:01:52.132379 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132390 | orchestrator | 03:01:52.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.132422 | orchestrator | 03:01:52.132 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 03:01:52.132431 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132442 | orchestrator | 03:01:52.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.132475 | orchestrator | 03:01:52.132 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 03:01:52.132484 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132494 | orchestrator | 03:01:52.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 03:01:52.132526 | orchestrator | 03:01:52.132 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 03:01:52.132536 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132546 | orchestrator | 03:01:52.132 STDOUT terraform:  + binding (known after apply) 2025-06-01 03:01:52.132557 | orchestrator | 03:01:52.132 STDOUT terraform:  + fixed_ip { 2025-06-01 03:01:52.132598 | orchestrator | 03:01:52.132 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-01 03:01:52.132610 | orchestrator | 03:01:52.132 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.132621 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132631 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132691 | orchestrator | 03:01:52.132 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-01 03:01:52.132739 | orchestrator | 03:01:52.132 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-01 03:01:52.132749 | orchestrator | 03:01:52.132 STDOUT terraform:  + force_destroy = false 2025-06-01 03:01:52.132782 | orchestrator | 03:01:52.132 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.132793 | orchestrator | 03:01:52.132 STDOUT terraform:  + port_id = (known after apply) 2025-06-01 03:01:52.132835 | orchestrator | 03:01:52.132 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.132847 | orchestrator | 03:01:52.132 STDOUT terraform:  + router_id = (known after apply) 2025-06-01 03:01:52.132885 | orchestrator | 03:01:52.132 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 03:01:52.132895 | orchestrator | 03:01:52.132 STDOUT terraform:  } 2025-06-01 03:01:52.132927 | orchestrator | 03:01:52.132 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-01 03:01:52.132960 | orchestrator | 03:01:52.132 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-01 03:01:52.133012 
| orchestrator | 03:01:52.132 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 03:01:52.133033 | orchestrator | 03:01:52.132 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 03:01:52.133049 | orchestrator | 03:01:52.133 STDOUT terraform:  + availability_zone_hints = [ 2025-06-01 03:01:52.133065 | orchestrator | 03:01:52.133 STDOUT terraform:  + "nova", 2025-06-01 03:01:52.133079 | orchestrator | 03:01:52.133 STDOUT terraform:  ] 2025-06-01 03:01:52.133099 | orchestrator | 03:01:52.133 STDOUT terraform:  + distributed = (known after apply) 2025-06-01 03:01:52.133143 | orchestrator | 03:01:52.133 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-01 03:01:52.133189 | orchestrator | 03:01:52.133 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-01 03:01:52.133235 | orchestrator | 03:01:52.133 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.133247 | orchestrator | 03:01:52.133 STDOUT terraform:  + name = "testbed" 2025-06-01 03:01:52.133289 | orchestrator | 03:01:52.133 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.133323 | orchestrator | 03:01:52.133 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.133335 | orchestrator | 03:01:52.133 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-01 03:01:52.133345 | orchestrator | 03:01:52.133 STDOUT terraform:  } 2025-06-01 03:01:52.133411 | orchestrator | 03:01:52.133 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-01 03:01:52.133464 | orchestrator | 03:01:52.133 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-01 03:01:52.133476 | orchestrator | 03:01:52.133 STDOUT terraform:  + description = "ssh" 2025-06-01 03:01:52.133487 | orchestrator | 03:01:52.133 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.133519 | 
orchestrator | 03:01:52.133 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.133531 | orchestrator | 03:01:52.133 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.133563 | orchestrator | 03:01:52.133 STDOUT terraform:  + port_range_max = 22 2025-06-01 03:01:52.133573 | orchestrator | 03:01:52.133 STDOUT terraform:  + port_range_min = 22 2025-06-01 03:01:52.133583 | orchestrator | 03:01:52.133 STDOUT terraform:  + protocol = "tcp" 2025-06-01 03:01:52.133625 | orchestrator | 03:01:52.133 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.133637 | orchestrator | 03:01:52.133 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.133669 | orchestrator | 03:01:52.133 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 03:01:52.133689 | orchestrator | 03:01:52.133 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 03:01:52.133728 | orchestrator | 03:01:52.133 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.133738 | orchestrator | 03:01:52.133 STDOUT terraform:  } 2025-06-01 03:01:52.133791 | orchestrator | 03:01:52.133 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-01 03:01:52.133844 | orchestrator | 03:01:52.133 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-01 03:01:52.133856 | orchestrator | 03:01:52.133 STDOUT terraform:  + description = "wireguard" 2025-06-01 03:01:52.133897 | orchestrator | 03:01:52.133 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.133906 | orchestrator | 03:01:52.133 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.133917 | orchestrator | 03:01:52.133 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.133949 | orchestrator | 03:01:52.133 STDOUT terraform:  + port_range_max = 51820 2025-06-01 03:01:52.133960 | orchestrator | 03:01:52.133 STDOUT 
terraform:  + port_range_min = 51820 2025-06-01 03:01:52.133971 | orchestrator | 03:01:52.133 STDOUT terraform:  + protocol = "udp" 2025-06-01 03:01:52.134180 | orchestrator | 03:01:52.133 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.134259 | orchestrator | 03:01:52.134 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.134274 | orchestrator | 03:01:52.134 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 03:01:52.134285 | orchestrator | 03:01:52.134 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 03:01:52.134308 | orchestrator | 03:01:52.134 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.134337 | orchestrator | 03:01:52.134 STDOUT terraform:  } 2025-06-01 03:01:52.134349 | orchestrator | 03:01:52.134 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-01 03:01:52.134362 | orchestrator | 03:01:52.134 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-01 03:01:52.134373 | orchestrator | 03:01:52.134 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.134384 | orchestrator | 03:01:52.134 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.134403 | orchestrator | 03:01:52.134 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.134415 | orchestrator | 03:01:52.134 STDOUT terraform:  + protocol = "tcp" 2025-06-01 03:01:52.134426 | orchestrator | 03:01:52.134 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.134437 | orchestrator | 03:01:52.134 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.134452 | orchestrator | 03:01:52.134 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-01 03:01:52.134463 | orchestrator | 03:01:52.134 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 03:01:52.134478 | orchestrator | 
03:01:52.134 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.134536 | orchestrator | 03:01:52.134 STDOUT terraform:  } 2025-06-01 03:01:52.134554 | orchestrator | 03:01:52.134 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-01 03:01:52.134594 | orchestrator | 03:01:52.134 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-01 03:01:52.134610 | orchestrator | 03:01:52.134 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.134624 | orchestrator | 03:01:52.134 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.134652 | orchestrator | 03:01:52.134 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.134667 | orchestrator | 03:01:52.134 STDOUT terraform:  + protocol = "udp" 2025-06-01 03:01:52.134708 | orchestrator | 03:01:52.134 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.134725 | orchestrator | 03:01:52.134 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.134763 | orchestrator | 03:01:52.134 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-01 03:01:52.134780 | orchestrator | 03:01:52.134 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 03:01:52.134820 | orchestrator | 03:01:52.134 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.134837 | orchestrator | 03:01:52.134 STDOUT terraform:  } 2025-06-01 03:01:52.134886 | orchestrator | 03:01:52.134 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-01 03:01:52.134938 | orchestrator | 03:01:52.134 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-01 03:01:52.134955 | orchestrator | 03:01:52.134 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.134969 | orchestrator | 03:01:52.134 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.135035 | orchestrator | 03:01:52.134 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.135050 | orchestrator | 03:01:52.134 STDOUT terraform:  + protocol = "icmp" 2025-06-01 03:01:52.135065 | orchestrator | 03:01:52.135 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.135079 | orchestrator | 03:01:52.135 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.135093 | orchestrator | 03:01:52.135 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 03:01:52.135133 | orchestrator | 03:01:52.135 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 03:01:52.135149 | orchestrator | 03:01:52.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.135164 | orchestrator | 03:01:52.135 STDOUT terraform:  } 2025-06-01 03:01:52.135219 | orchestrator | 03:01:52.135 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-01 03:01:52.135270 | orchestrator | 03:01:52.135 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-01 03:01:52.135287 | orchestrator | 03:01:52.135 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.135301 | orchestrator | 03:01:52.135 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.135326 | orchestrator | 03:01:52.135 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.135350 | orchestrator | 03:01:52.135 STDOUT terraform:  + protocol = "tcp" 2025-06-01 03:01:52.135375 | orchestrator | 03:01:52.135 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.135411 | orchestrator | 03:01:52.135 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.135427 | orchestrator | 03:01:52.135 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 03:01:52.135462 | orchestrator | 03:01:52.135 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-01 03:01:52.135478 | orchestrator | 03:01:52.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.135493 | orchestrator | 03:01:52.135 STDOUT terraform:  } 2025-06-01 03:01:52.135555 | orchestrator | 03:01:52.135 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-01 03:01:52.135596 | orchestrator | 03:01:52.135 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-01 03:01:52.135611 | orchestrator | 03:01:52.135 STDOUT terraform:  + direction = "ingress" 2025-06-01 03:01:52.135626 | orchestrator | 03:01:52.135 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 03:01:52.135665 | orchestrator | 03:01:52.135 STDOUT terraform:  + id = (known after apply) 2025-06-01 03:01:52.135680 | orchestrator | 03:01:52.135 STDOUT terraform:  + protocol = "udp" 2025-06-01 03:01:52.135713 | orchestrator | 03:01:52.135 STDOUT terraform:  + region = (known after apply) 2025-06-01 03:01:52.135743 | orchestrator | 03:01:52.135 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 03:01:52.135758 | orchestrator | 03:01:52.135 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 03:01:52.135793 | orchestrator | 03:01:52.135 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 03:01:52.135823 | orchestrator | 03:01:52.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 03:01:52.135838 | orchestrator | 03:01:52.135 STDOUT terraform:  } 2025-06-01 03:01:52.135883 | orchestrator | 03:01:52.135 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-01 03:01:52.135937 | orchestrator | 03:01:52.135 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-01 03:01:52.135953 | orchestrator | 03:01:52.135 STDOUT terraform:  + direction = "ingress" 
2025-06-01 03:01:52.135968 | orchestrator | 03:01:52.135 STDOUT terraform:  + ethertype = "IPv4"
2025-06-01 03:01:52.136022 | orchestrator | 03:01:52.135 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.136040 | orchestrator | 03:01:52.136 STDOUT terraform:  + protocol = "icmp"
2025-06-01 03:01:52.136054 | orchestrator | 03:01:52.136 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.136090 | orchestrator | 03:01:52.136 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-01 03:01:52.136106 | orchestrator | 03:01:52.136 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-01 03:01:52.136143 | orchestrator | 03:01:52.136 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-01 03:01:52.136168 | orchestrator | 03:01:52.136 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-01 03:01:52.136182 | orchestrator | 03:01:52.136 STDOUT terraform:  }
2025-06-01 03:01:52.136229 | orchestrator | 03:01:52.136 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-01 03:01:52.136279 | orchestrator | 03:01:52.136 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-01 03:01:52.136296 | orchestrator | 03:01:52.136 STDOUT terraform:  + description = "vrrp"
2025-06-01 03:01:52.136310 | orchestrator | 03:01:52.136 STDOUT terraform:  + direction = "ingress"
2025-06-01 03:01:52.136324 | orchestrator | 03:01:52.136 STDOUT terraform:  + ethertype = "IPv4"
2025-06-01 03:01:52.136364 | orchestrator | 03:01:52.136 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.136386 | orchestrator | 03:01:52.136 STDOUT terraform:  + protocol = "112"
2025-06-01 03:01:52.136400 | orchestrator | 03:01:52.136 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.136455 | orchestrator | 03:01:52.136 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-01 03:01:52.136468 | orchestrator | 03:01:52.136 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-01 03:01:52.136483 | orchestrator | 03:01:52.136 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-01 03:01:52.136519 | orchestrator | 03:01:52.136 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-01 03:01:52.136544 | orchestrator | 03:01:52.136 STDOUT terraform:  }
2025-06-01 03:01:52.136575 | orchestrator | 03:01:52.136 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-01 03:01:52.136626 | orchestrator | 03:01:52.136 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-01 03:01:52.136642 | orchestrator | 03:01:52.136 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 03:01:52.136697 | orchestrator | 03:01:52.136 STDOUT terraform:  + description = "management security group"
2025-06-01 03:01:52.136714 | orchestrator | 03:01:52.136 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.136729 | orchestrator | 03:01:52.136 STDOUT terraform:  + name = "testbed-management"
2025-06-01 03:01:52.136767 | orchestrator | 03:01:52.136 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.136784 | orchestrator | 03:01:52.136 STDOUT terraform:  + stateful = (known after apply)
2025-06-01 03:01:52.136814 | orchestrator | 03:01:52.136 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-01 03:01:52.136828 | orchestrator | 03:01:52.136 STDOUT terraform:  }
2025-06-01 03:01:52.136872 | orchestrator | 03:01:52.136 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-01 03:01:52.136915 | orchestrator | 03:01:52.136 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-01 03:01:52.136931 | orchestrator | 03:01:52.136 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 03:01:52.137018 | orchestrator | 03:01:52.136 STDOUT terraform:  + description = "node security group"
2025-06-01 03:01:52.137065 | orchestrator | 03:01:52.136 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.137083 | orchestrator | 03:01:52.136 STDOUT terraform:  + name = "testbed-node"
2025-06-01 03:01:52.137094 | orchestrator | 03:01:52.137 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.137105 | orchestrator | 03:01:52.137 STDOUT terraform:  + stateful = (known after apply)
2025-06-01 03:01:52.137120 | orchestrator | 03:01:52.137 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-01 03:01:52.137131 | orchestrator | 03:01:52.137 STDOUT terraform:  }
2025-06-01 03:01:52.137145 | orchestrator | 03:01:52.137 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-01 03:01:52.137189 | orchestrator | 03:01:52.137 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-01 03:01:52.137220 | orchestrator | 03:01:52.137 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 03:01:52.137254 | orchestrator | 03:01:52.137 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-01 03:01:52.137270 | orchestrator | 03:01:52.137 STDOUT terraform:  + dns_nameservers = [
2025-06-01 03:01:52.137285 | orchestrator | 03:01:52.137 STDOUT terraform:  + "8.8.8.8",
2025-06-01 03:01:52.137296 | orchestrator | 03:01:52.137 STDOUT terraform:  + "9.9.9.9",
2025-06-01 03:01:52.137310 | orchestrator | 03:01:52.137 STDOUT terraform:  ]
2025-06-01 03:01:52.137324 | orchestrator | 03:01:52.137 STDOUT terraform:  + enable_dhcp = true
2025-06-01 03:01:52.137338 | orchestrator | 03:01:52.137 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-01 03:01:52.137380 | orchestrator | 03:01:52.137 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.137397 | orchestrator | 03:01:52.137 STDOUT terraform:  + ip_version = 4
2025-06-01 03:01:52.137421 | orchestrator | 03:01:52.137 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-01 03:01:52.137451 | orchestrator | 03:01:52.137 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-01 03:01:52.137495 | orchestrator | 03:01:52.137 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-01 03:01:52.137511 | orchestrator | 03:01:52.137 STDOUT terraform:  + network_id = (known after apply)
2025-06-01 03:01:52.137539 | orchestrator | 03:01:52.137 STDOUT terraform:  + no_gateway = false
2025-06-01 03:01:52.137570 | orchestrator | 03:01:52.137 STDOUT terraform:  + region = (known after apply)
2025-06-01 03:01:52.137602 | orchestrator | 03:01:52.137 STDOUT terraform:  + service_types = (known after apply)
2025-06-01 03:01:52.137636 | orchestrator | 03:01:52.137 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-01 03:01:52.137652 | orchestrator | 03:01:52.137 STDOUT terraform:  + allocation_pool {
2025-06-01 03:01:52.137666 | orchestrator | 03:01:52.137 STDOUT terraform:  + end = "192.168.31.250"
2025-06-01 03:01:52.137680 | orchestrator | 03:01:52.137 STDOUT terraform:  + start = "192.168.31.200"
2025-06-01 03:01:52.137694 | orchestrator | 03:01:52.137 STDOUT terraform:  }
2025-06-01 03:01:52.137708 | orchestrator | 03:01:52.137 STDOUT terraform:  }
2025-06-01 03:01:52.137730 | orchestrator | 03:01:52.137 STDOUT terraform:  # terraform_data.image will be created
2025-06-01 03:01:52.137773 | orchestrator | 03:01:52.137 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-01 03:01:52.137789 | orchestrator | 03:01:52.137 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.137800 | orchestrator | 03:01:52.137 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-01 03:01:52.137814 | orchestrator | 03:01:52.137 STDOUT terraform:  + output = (known after apply)
2025-06-01 03:01:52.137825 | orchestrator | 03:01:52.137 STDOUT terraform:  }
2025-06-01 03:01:52.137839 | orchestrator | 03:01:52.137 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-01 03:01:52.137867 | orchestrator | 03:01:52.137 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-01 03:01:52.137882 | orchestrator | 03:01:52.137 STDOUT terraform:  + id = (known after apply)
2025-06-01 03:01:52.137896 | orchestrator | 03:01:52.137 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-01 03:01:52.137937 | orchestrator | 03:01:52.137 STDOUT terraform:  + output = (known after apply)
2025-06-01 03:01:52.137949 | orchestrator | 03:01:52.137 STDOUT terraform:  }
2025-06-01 03:01:52.137963 | orchestrator | 03:01:52.137 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-01 03:01:52.137977 | orchestrator | 03:01:52.137 STDOUT terraform: Changes to Outputs:
2025-06-01 03:01:52.138121 | orchestrator | 03:01:52.137 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-01 03:01:52.138139 | orchestrator | 03:01:52.137 STDOUT terraform:  + private_key = (sensitive value)
2025-06-01 03:01:52.352250 | orchestrator | 03:01:52.350 STDOUT terraform: terraform_data.image: Creating...
2025-06-01 03:01:52.352338 | orchestrator | 03:01:52.350 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-01 03:01:52.352353 | orchestrator | 03:01:52.350 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5e7968ad-3eb7-9012-8bb6-d8dce47d8de0]
2025-06-01 03:01:52.352368 | orchestrator | 03:01:52.350 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=623336be-6808-4b2c-6f59-96e4c4756116]
2025-06-01 03:01:52.362977 | orchestrator | 03:01:52.362 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-01 03:01:52.370508 | orchestrator | 03:01:52.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-01 03:01:52.371649 | orchestrator | 03:01:52.371 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-01 03:01:52.371883 | orchestrator | 03:01:52.371 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-01 03:01:52.372524 | orchestrator | 03:01:52.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-01 03:01:52.372923 | orchestrator | 03:01:52.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-01 03:01:52.375023 | orchestrator | 03:01:52.374 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-01 03:01:52.377122 | orchestrator | 03:01:52.376 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-01 03:01:52.378437 | orchestrator | 03:01:52.378 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-01 03:01:52.379343 | orchestrator | 03:01:52.379 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-01 03:01:52.867608 | orchestrator | 03:01:52.867 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-01 03:01:52.876473 | orchestrator | 03:01:52.876 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-06-01 03:01:52.880761 | orchestrator | 03:01:52.880 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-01 03:01:52.891018 | orchestrator | 03:01:52.890 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-01 03:01:52.892538 | orchestrator | 03:01:52.892 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-06-01 03:01:52.899058 | orchestrator | 03:01:52.898 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-01 03:01:58.483449 | orchestrator | 03:01:58.483 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=61f8980e-7af4-4015-86ec-ff58270b3a3b]
2025-06-01 03:01:58.496714 | orchestrator | 03:01:58.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-01 03:02:02.372873 | orchestrator | 03:02:02.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-01 03:02:02.373719 | orchestrator | 03:02:02.373 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-01 03:02:02.378197 | orchestrator | 03:02:02.377 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-01 03:02:02.379172 | orchestrator | 03:02:02.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-01 03:02:02.382550 | orchestrator | 03:02:02.382 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-01 03:02:02.384677 | orchestrator | 03:02:02.384 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-01 03:02:02.877772 | orchestrator | 03:02:02.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-01 03:02:02.891896 | orchestrator | 03:02:02.891 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-01 03:02:02.900211 | orchestrator | 03:02:02.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-01 03:02:02.955572 | orchestrator | 03:02:02.954 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=13757f92-d131-4fb2-97b0-30fa6d4a703c]
2025-06-01 03:02:02.965044 | orchestrator | 03:02:02.964 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=5b466634-774d-43fb-b203-3068f5674087]
2025-06-01 03:02:02.965351 | orchestrator | 03:02:02.965 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-01 03:02:02.973675 | orchestrator | 03:02:02.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-01 03:02:02.978232 | orchestrator | 03:02:02.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=52cdef25-f5ea-459b-a3d2-6dc79872de85]
2025-06-01 03:02:02.990187 | orchestrator | 03:02:02.989 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-01 03:02:03.002946 | orchestrator | 03:02:03.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=f8222133-3d15-437e-b81b-973910c5fe79]
2025-06-01 03:02:03.004734 | orchestrator | 03:02:03.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=2bf032b4-821f-4153-a16b-c7c7b9690c3c]
2025-06-01 03:02:03.011316 | orchestrator | 03:02:03.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-01 03:02:03.011504 | orchestrator | 03:02:03.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-01 03:02:03.016897 | orchestrator | 03:02:03.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=48a1c260-3052-4e59-9db5-94630d6736af]
2025-06-01 03:02:03.023314 | orchestrator | 03:02:03.023 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-01 03:02:03.100497 | orchestrator | 03:02:03.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9]
2025-06-01 03:02:03.117688 | orchestrator | 03:02:03.117 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=1fa93f47-9163-4651-815b-24671ddef110]
2025-06-01 03:02:03.123462 | orchestrator | 03:02:03.123 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-01 03:02:03.128231 | orchestrator | 03:02:03.128 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=fb5a2203cf3ada585e4d3059e37e8a1001b50c29]
2025-06-01 03:02:03.131610 | orchestrator | 03:02:03.131 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=88d52e43-2c9d-46e0-bf5e-2238e33d97a2]
2025-06-01 03:02:03.136416 | orchestrator | 03:02:03.136 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-01 03:02:03.140756 | orchestrator | 03:02:03.140 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-01 03:02:03.144721 | orchestrator | 03:02:03.144 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=3609b7c9bf80f4b7dc806e32d1d5cddf607c75cc]
2025-06-01 03:02:08.499844 | orchestrator | 03:02:08.499 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-01 03:02:08.847891 | orchestrator | 03:02:08.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=866e2d9c-bebc-4a8f-8f48-25266a5b8758]
2025-06-01 03:02:09.332022 | orchestrator | 03:02:09.331 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=45250c26-f9b7-4d7a-a631-2b6399a87ec1]
2025-06-01 03:02:09.341985 | orchestrator | 03:02:09.341 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-01 03:02:12.966303 | orchestrator | 03:02:12.965 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-01 03:02:12.974634 | orchestrator | 03:02:12.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-01 03:02:12.991062 | orchestrator | 03:02:12.990 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-01 03:02:13.012357 | orchestrator | 03:02:13.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-01 03:02:13.012568 | orchestrator | 03:02:13.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-01 03:02:13.023570 | orchestrator | 03:02:13.023 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-01 03:02:13.394316 | orchestrator | 03:02:13.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=60eef7c2-e85a-474a-b822-4cdf08490182]
2025-06-01 03:02:14.387251 | orchestrator | 03:02:14.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=87bf084a-b980-43ed-ba9b-a8dc90a62403]
2025-06-01 03:02:14.387665 | orchestrator | 03:02:14.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=f2e291b1-f353-4be0-8ae6-8e4ff272a509]
2025-06-01 03:02:14.391258 | orchestrator | 03:02:14.390 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=360dd9b1-930d-49be-ab9f-7b080f656ebe]
2025-06-01 03:02:14.392327 | orchestrator | 03:02:14.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=9720f4a6-d2e6-4f67-b6f6-fba741bae89b]
2025-06-01 03:02:14.393078 | orchestrator | 03:02:14.392 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=45c371e1-26fd-4496-bbdc-2b2c1e33cf22]
2025-06-01 03:02:16.876045 | orchestrator | 03:02:16.875 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=249340e9-530f-44e8-a4fc-9fbf28b67f41]
2025-06-01 03:02:16.882668 | orchestrator | 03:02:16.882 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-01 03:02:16.883861 | orchestrator | 03:02:16.883 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-01 03:02:16.884619 | orchestrator | 03:02:16.884 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-01 03:02:17.086908 | orchestrator | 03:02:17.086 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=7af63c9a-2f30-4a28-b566-0855dc1ec3ae]
2025-06-01 03:02:17.101402 | orchestrator | 03:02:17.101 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-01 03:02:17.104532 | orchestrator | 03:02:17.104 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-01 03:02:17.106686 | orchestrator | 03:02:17.106 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-01 03:02:17.111097 | orchestrator | 03:02:17.110 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-01 03:02:17.111785 | orchestrator | 03:02:17.111 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4e25acb4-c878-48ea-9339-943edb8801ac]
2025-06-01 03:02:17.112454 | orchestrator | 03:02:17.112 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-01 03:02:17.116516 | orchestrator | 03:02:17.116 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-01 03:02:17.116911 | orchestrator | 03:02:17.116 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-01 03:02:17.121560 | orchestrator | 03:02:17.121 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-01 03:02:17.125475 | orchestrator | 03:02:17.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-01 03:02:17.603851 | orchestrator | 03:02:17.603 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=5d9e6bee-5bfd-4497-9bf5-ed91fbc94967]
2025-06-01 03:02:17.620380 | orchestrator | 03:02:17.619 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-01 03:02:17.873711 | orchestrator | 03:02:17.873 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=b14e0b48-ee64-467a-8ea9-8337e31dd446]
2025-06-01 03:02:17.886611 | orchestrator | 03:02:17.886 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-01 03:02:18.045911 | orchestrator | 03:02:18.045 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=fc215a8d-f0e7-45a4-97bc-8d0754ad4347]
2025-06-01 03:02:18.059876 | orchestrator | 03:02:18.059 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-01 03:02:18.250430 | orchestrator | 03:02:18.250 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=c2c88f93-7e6d-4104-9758-62d5de1fa091]
2025-06-01 03:02:18.258839 | orchestrator | 03:02:18.258 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-01 03:02:18.405403 | orchestrator | 03:02:18.405 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=94f09b4a-88c4-40c7-b359-5f632d27a0a0]
2025-06-01 03:02:18.412384 | orchestrator | 03:02:18.412 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-01 03:02:18.423179 | orchestrator | 03:02:18.422 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e0a845a9-93ab-4606-b362-a47896d84f7c]
2025-06-01 03:02:18.429323 | orchestrator | 03:02:18.429 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-01 03:02:18.560520 | orchestrator | 03:02:18.560 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=ba19b4c2-74c3-4cd9-8f51-3dc79aca8708]
2025-06-01 03:02:18.568121 | orchestrator | 03:02:18.567 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-01 03:02:18.707762 | orchestrator | 03:02:18.707 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=981869dc-800e-4764-afa3-e00acb5c0bcb]
2025-06-01 03:02:18.931766 | orchestrator | 03:02:18.931 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=57844234-76c7-4888-a3a3-2ae6a32c1c01]
2025-06-01 03:02:22.727205 | orchestrator | 03:02:22.726 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=1b8320e0-2b13-45fe-b828-248153fcfc53]
2025-06-01 03:02:22.779531 | orchestrator | 03:02:22.779 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=18b157c4-66cc-49a6-9c19-4209690e3423]
2025-06-01 03:02:22.803217 | orchestrator | 03:02:22.802 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=233bab93-c8db-4d4c-aa6e-d9f75d0e8cf9]
2025-06-01 03:02:23.011030 | orchestrator | 03:02:23.010 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=250b1526-cec1-43f1-8f85-4c52ac0091b8]
2025-06-01 03:02:23.285692 | orchestrator | 03:02:23.285 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=b7c00bda-db82-44b8-801d-a3f2f963f1e4]
2025-06-01 03:02:23.317883 | orchestrator | 03:02:23.317 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=f0016548-594a-469a-9bce-bb69a03694ca]
2025-06-01 03:02:23.582711 | orchestrator | 03:02:23.582 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=9333ce4f-75ed-4d4e-9188-b877c6624712]
2025-06-01 03:02:24.403172 | orchestrator | 03:02:24.402 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=b773b484-83e2-4c8a-a78d-d2dd7022fa3c]
2025-06-01 03:02:24.426488 | orchestrator | 03:02:24.426 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-01 03:02:24.438756 | orchestrator | 03:02:24.438 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-01 03:02:24.443956 | orchestrator | 03:02:24.443 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-01 03:02:24.448322 | orchestrator | 03:02:24.448 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-01 03:02:24.450981 | orchestrator | 03:02:24.450 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-01 03:02:24.458502 | orchestrator | 03:02:24.458 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-01 03:02:24.461877 | orchestrator | 03:02:24.461 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-01 03:02:30.745788 | orchestrator | 03:02:30.745 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=adb360d0-b1e6-4009-802d-a3459cdfb108]
2025-06-01 03:02:30.757436 | orchestrator | 03:02:30.757 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-01 03:02:30.761983 | orchestrator | 03:02:30.761 STDOUT terraform: local_file.inventory: Creating...
2025-06-01 03:02:30.765418 | orchestrator | 03:02:30.765 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-01 03:02:30.767794 | orchestrator | 03:02:30.767 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=6651d287f09474f77e015eedcee9724038df1fc9]
2025-06-01 03:02:30.775419 | orchestrator | 03:02:30.775 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=68a94ac353165c8c2cae21a1867c181468193abf]
2025-06-01 03:02:31.454290 | orchestrator | 03:02:31.453 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=adb360d0-b1e6-4009-802d-a3459cdfb108]
2025-06-01 03:02:34.440276 | orchestrator | 03:02:34.439 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-01 03:02:34.445564 | orchestrator | 03:02:34.445 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-01 03:02:34.450063 | orchestrator | 03:02:34.449 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-01 03:02:34.453392 | orchestrator | 03:02:34.453 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-01 03:02:34.459620 | orchestrator | 03:02:34.459 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-01 03:02:34.462870 | orchestrator | 03:02:34.462 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-01 03:02:44.440750 | orchestrator | 03:02:44.440 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-01 03:02:44.445992 | orchestrator | 03:02:44.445 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-01 03:02:44.451243 | orchestrator | 03:02:44.450 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-01 03:02:44.454508 | orchestrator | 03:02:44.454 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-01 03:02:44.460869 | orchestrator | 03:02:44.460 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-01 03:02:44.463128 | orchestrator | 03:02:44.462 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-01 03:02:44.947704 | orchestrator | 03:02:44.947 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=974e7dff-36ef-4ce8-b8a8-a71d0494267c]
2025-06-01 03:02:44.974231 | orchestrator | 03:02:44.973 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=34055a82-ca4f-432a-936a-943d5172e52f]
2025-06-01 03:02:54.441116 | orchestrator | 03:02:54.440 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-06-01 03:02:54.446440 | orchestrator | 03:02:54.446 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-01 03:02:54.451631 | orchestrator | 03:02:54.451 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-06-01 03:02:54.461832 | orchestrator | 03:02:54.461 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-06-01 03:02:55.142828 | orchestrator | 03:02:55.142 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=7cbcc4e4-4b7f-4647-a800-2a8e18eb595f]
2025-06-01 03:02:55.340838 | orchestrator | 03:02:55.340 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=ee0ee9a5-a3cd-42bb-8e79-4e9d3796983c]
2025-06-01 03:02:55.370796 | orchestrator | 03:02:55.370 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=09a5754a-73d4-45be-9645-8320d0f4495d]
2025-06-01 03:02:55.574573 | orchestrator | 03:02:55.574 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=79139494-e920-4153-97bf-acd899602e7d]
2025-06-01 03:02:55.589104 | orchestrator | 03:02:55.588 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-01 03:02:55.599988 | orchestrator | 03:02:55.599 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3573200849546254206]
2025-06-01 03:02:55.604475 | orchestrator | 03:02:55.604 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-01 03:02:55.609270 | orchestrator | 03:02:55.608 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-06-01 03:02:55.609339 | orchestrator | 03:02:55.609 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-06-01 03:02:55.610858 | orchestrator | 03:02:55.610 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-01 03:02:55.614873 | orchestrator | 03:02:55.614 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-01 03:02:55.620276 | orchestrator | 03:02:55.620 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-06-01 03:02:55.634854 | orchestrator | 03:02:55.634 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-01 03:02:55.636436 | orchestrator | 03:02:55.636 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-01 03:02:55.648982 | orchestrator | 03:02:55.648 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-01 03:02:55.650971 | orchestrator | 03:02:55.650 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-06-01 03:03:00.953246 | orchestrator | 03:03:00.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=79139494-e920-4153-97bf-acd899602e7d/1fa93f47-9163-4651-815b-24671ddef110]
2025-06-01 03:03:00.956147 | orchestrator | 03:03:00.955 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=ee0ee9a5-a3cd-42bb-8e79-4e9d3796983c/88d52e43-2c9d-46e0-bf5e-2238e33d97a2]
2025-06-01 03:03:00.973867 | orchestrator | 03:03:00.973 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=7cbcc4e4-4b7f-4647-a800-2a8e18eb595f/eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9]
2025-06-01 03:03:00.995794 | orchestrator | 03:03:00.995 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=79139494-e920-4153-97bf-acd899602e7d/f8222133-3d15-437e-b81b-973910c5fe79]
2025-06-01 03:03:00.999783 | orchestrator | 03:03:00.999 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=ee0ee9a5-a3cd-42bb-8e79-4e9d3796983c/2bf032b4-821f-4153-a16b-c7c7b9690c3c]
2025-06-01 03:03:01.021325 | orchestrator | 03:03:01.020 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=7cbcc4e4-4b7f-4647-a800-2a8e18eb595f/5b466634-774d-43fb-b203-3068f5674087]
2025-06-01 03:03:01.044186 | orchestrator | 03:03:01.043 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=ee0ee9a5-a3cd-42bb-8e79-4e9d3796983c/48a1c260-3052-4e59-9db5-94630d6736af]
2025-06-01 03:03:01.046259 | orchestrator | 03:03:01.045 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=79139494-e920-4153-97bf-acd899602e7d/13757f92-d131-4fb2-97b0-30fa6d4a703c]
2025-06-01 03:03:01.077036 | orchestrator | 03:03:01.076 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=7cbcc4e4-4b7f-4647-a800-2a8e18eb595f/52cdef25-f5ea-459b-a3d2-6dc79872de85]
2025-06-01 03:03:05.652725 | orchestrator | 03:03:05.652 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-06-01 03:03:15.653718 | orchestrator | 03:03:15.653 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-06-01 03:03:16.073105 | orchestrator | 03:03:16.072 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=045d9c5e-3613-443d-b277-18e4b1f2fcf3]
2025-06-01 03:03:16.096055 | orchestrator | 03:03:16.095 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-06-01 03:03:16.096126 | orchestrator | 03:03:16.095 STDOUT terraform: Outputs: 2025-06-01 03:03:16.096145 | orchestrator | 03:03:16.095 STDOUT terraform: manager_address = 2025-06-01 03:03:16.096153 | orchestrator | 03:03:16.096 STDOUT terraform: private_key = 2025-06-01 03:03:16.425559 | orchestrator | ok: Runtime: 0:01:35.178855 2025-06-01 03:03:16.448856 | 2025-06-01 03:03:16.448982 | TASK [Create infrastructure (stable)] 2025-06-01 03:03:16.998193 | orchestrator | skipping: Conditional result was False 2025-06-01 03:03:17.007110 | 2025-06-01 03:03:17.007252 | TASK [Fetch manager address] 2025-06-01 03:03:17.517259 | orchestrator | ok 2025-06-01 03:03:17.524521 | 2025-06-01 03:03:17.524651 | TASK [Set manager_host address] 2025-06-01 03:03:17.586129 | orchestrator | ok 2025-06-01 03:03:17.595198 | 2025-06-01 03:03:17.595328 | LOOP [Update ansible collections] 2025-06-01 03:03:18.474348 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 03:03:18.474671 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-01 03:03:18.474715 | orchestrator | Starting galaxy collection install process 2025-06-01 03:03:18.474740 | orchestrator | Process install dependency map 2025-06-01 03:03:18.474761 | orchestrator | Starting collection install process 2025-06-01 03:03:18.474781 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-06-01 03:03:18.474806 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-06-01 03:03:18.474851 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-01 03:03:18.474900 | orchestrator | ok: Item: commons Runtime: 0:00:00.557268 2025-06-01 03:03:19.321663 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 
2025-06-01 03:03:19.321786 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-01 03:03:19.321817 | orchestrator | Starting galaxy collection install process 2025-06-01 03:03:19.321839 | orchestrator | Process install dependency map 2025-06-01 03:03:19.321860 | orchestrator | Starting collection install process 2025-06-01 03:03:19.321880 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-06-01 03:03:19.321899 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-06-01 03:03:19.321918 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-01 03:03:19.321949 | orchestrator | ok: Item: services Runtime: 0:00:00.607251 2025-06-01 03:03:19.331938 | 2025-06-01 03:03:19.332059 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-01 03:03:30.048316 | orchestrator | ok 2025-06-01 03:03:30.068763 | 2025-06-01 03:03:30.068902 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-01 03:04:30.129689 | orchestrator | ok 2025-06-01 03:04:30.135699 | 2025-06-01 03:04:30.135785 | TASK [Fetch manager ssh hostkey] 2025-06-01 03:04:31.748261 | orchestrator | Output suppressed because no_log was given 2025-06-01 03:04:31.753794 | 2025-06-01 03:04:31.753874 | TASK [Get ssh keypair from terraform environment] 2025-06-01 03:04:32.300887 | orchestrator | ok: Runtime: 0:00:00.009061 2025-06-01 03:04:32.308572 | 2025-06-01 03:04:32.308670 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-01 03:04:32.337201 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-06-01 03:04:32.360230 | 2025-06-01 03:04:32.360352 | TASK [Run manager part 0] 2025-06-01 03:04:33.559188 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 03:04:33.611545 | orchestrator | 2025-06-01 03:04:33.611597 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-01 03:04:33.611605 | orchestrator | 2025-06-01 03:04:33.611617 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-01 03:04:35.408605 | orchestrator | ok: [testbed-manager] 2025-06-01 03:04:35.408647 | orchestrator | 2025-06-01 03:04:35.408667 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-01 03:04:35.408677 | orchestrator | 2025-06-01 03:04:35.408685 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:04:37.318202 | orchestrator | ok: [testbed-manager] 2025-06-01 03:04:37.318257 | orchestrator | 2025-06-01 03:04:37.318264 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-01 03:04:38.003008 | orchestrator | ok: [testbed-manager] 2025-06-01 03:04:38.003166 | orchestrator | 2025-06-01 03:04:38.003186 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-01 03:04:38.063973 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.064029 | orchestrator | 2025-06-01 03:04:38.064038 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-01 03:04:38.087423 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.087462 | orchestrator | 2025-06-01 03:04:38.087470 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-01 03:04:38.108080 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.108118 | 
orchestrator | 2025-06-01 03:04:38.108123 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-01 03:04:38.133234 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.133278 | orchestrator | 2025-06-01 03:04:38.133287 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-01 03:04:38.168693 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.168743 | orchestrator | 2025-06-01 03:04:38.168754 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-01 03:04:38.206500 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.206561 | orchestrator | 2025-06-01 03:04:38.206579 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-01 03:04:38.242629 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:04:38.242676 | orchestrator | 2025-06-01 03:04:38.242684 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-01 03:04:39.041308 | orchestrator | changed: [testbed-manager] 2025-06-01 03:04:39.041372 | orchestrator | 2025-06-01 03:04:39.041382 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-01 03:07:40.078659 | orchestrator | changed: [testbed-manager] 2025-06-01 03:07:40.078730 | orchestrator | 2025-06-01 03:07:40.078748 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-01 03:08:53.220523 | orchestrator | changed: [testbed-manager] 2025-06-01 03:08:53.220626 | orchestrator | 2025-06-01 03:08:53.220643 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-01 03:09:16.006247 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:16.006336 | orchestrator | 2025-06-01 03:09:16.006353 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-06-01 03:09:24.804783 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:24.804878 | orchestrator | 2025-06-01 03:09:24.804897 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-01 03:09:24.852954 | orchestrator | ok: [testbed-manager] 2025-06-01 03:09:24.853030 | orchestrator | 2025-06-01 03:09:24.853045 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-01 03:09:25.646563 | orchestrator | ok: [testbed-manager] 2025-06-01 03:09:25.646652 | orchestrator | 2025-06-01 03:09:25.646670 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-01 03:09:26.389709 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:26.389821 | orchestrator | 2025-06-01 03:09:26.389838 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-01 03:09:32.793895 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:32.793976 | orchestrator | 2025-06-01 03:09:32.794006 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-01 03:09:38.729145 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:38.729242 | orchestrator | 2025-06-01 03:09:38.729262 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-01 03:09:41.348088 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:41.348178 | orchestrator | 2025-06-01 03:09:41.348195 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-01 03:09:43.123825 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:43.123941 | orchestrator | 2025-06-01 03:09:43.123967 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-01 
03:09:44.217846 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-01 03:09:44.217945 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-01 03:09:44.217961 | orchestrator | 2025-06-01 03:09:44.217974 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-01 03:09:44.259958 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-01 03:09:44.260064 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-01 03:09:44.260081 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-01 03:09:44.260094 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-01 03:09:47.531216 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-01 03:09:47.531257 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-01 03:09:47.531264 | orchestrator | 2025-06-01 03:09:47.531271 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-01 03:09:48.099658 | orchestrator | changed: [testbed-manager] 2025-06-01 03:09:48.099779 | orchestrator | 2025-06-01 03:09:48.099798 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-01 03:13:09.317080 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-01 03:13:09.317132 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-01 03:13:09.317143 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-01 03:13:09.317150 | orchestrator | 2025-06-01 03:13:09.317158 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-01 03:13:11.630477 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-06-01 03:13:11.630603 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-01 03:13:11.630622 | orchestrator | 2025-06-01 03:13:11.630636 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-01 03:13:11.630648 | orchestrator | 2025-06-01 03:13:11.630660 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:13:12.973761 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:12.973797 | orchestrator | 2025-06-01 03:13:12.973805 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-01 03:13:13.021936 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:13.021977 | orchestrator | 2025-06-01 03:13:13.021986 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-01 03:13:13.085356 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:13.085399 | orchestrator | 2025-06-01 03:13:13.085409 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-01 03:13:13.840470 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:13.840511 | orchestrator | 2025-06-01 03:13:13.840520 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-01 03:13:14.614004 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:14.614072 | orchestrator | 2025-06-01 03:13:14.614081 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-01 03:13:15.979491 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-01 03:13:15.979610 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-01 03:13:15.979628 | orchestrator | 2025-06-01 03:13:15.979661 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-06-01 03:13:17.402889 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:17.402998 | orchestrator | 2025-06-01 03:13:17.403015 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-01 03:13:19.109786 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:13:19.109871 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-01 03:13:19.109885 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:13:19.109896 | orchestrator | 2025-06-01 03:13:19.109908 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-01 03:13:19.694350 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:19.694413 | orchestrator | 2025-06-01 03:13:19.694429 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-01 03:13:19.763108 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:19.763159 | orchestrator | 2025-06-01 03:13:19.763165 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-01 03:13:20.662235 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 03:13:20.662333 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:20.662350 | orchestrator | 2025-06-01 03:13:20.662363 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-01 03:13:20.700993 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:20.701061 | orchestrator | 2025-06-01 03:13:20.701072 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-01 03:13:20.735487 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:20.735581 | orchestrator | 2025-06-01 03:13:20.735595 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-06-01 03:13:20.771149 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:20.771220 | orchestrator | 2025-06-01 03:13:20.771236 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-01 03:13:20.818681 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:20.818742 | orchestrator | 2025-06-01 03:13:20.818757 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-01 03:13:21.506216 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:21.506307 | orchestrator | 2025-06-01 03:13:21.506324 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-01 03:13:21.506337 | orchestrator | 2025-06-01 03:13:21.506350 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:13:22.943410 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:22.943503 | orchestrator | 2025-06-01 03:13:22.943520 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-01 03:13:23.865870 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:23.865957 | orchestrator | 2025-06-01 03:13:23.865974 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:13:23.865988 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-01 03:13:23.866000 | orchestrator | 2025-06-01 03:13:24.269826 | orchestrator | ok: Runtime: 0:08:51.128954 2025-06-01 03:13:24.282650 | 2025-06-01 03:13:24.283246 | TASK [Point out that the log in on the manager is now possible] 2025-06-01 03:13:24.316303 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2025-06-01 03:13:24.325638 | 2025-06-01 03:13:24.325754 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-01 03:13:24.366585 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-01 03:13:24.373797 | 2025-06-01 03:13:24.373904 | TASK [Run manager part 1 + 2] 2025-06-01 03:13:25.201576 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 03:13:25.255496 | orchestrator | 2025-06-01 03:13:25.255543 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-01 03:13:25.255580 | orchestrator | 2025-06-01 03:13:25.255594 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:13:28.282610 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:28.282663 | orchestrator | 2025-06-01 03:13:28.282681 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-01 03:13:28.315836 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:28.315883 | orchestrator | 2025-06-01 03:13:28.315894 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-01 03:13:28.354993 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:28.355056 | orchestrator | 2025-06-01 03:13:28.355070 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 03:13:28.391125 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:28.391176 | orchestrator | 2025-06-01 03:13:28.391187 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 03:13:28.451488 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:28.451541 | orchestrator | 2025-06-01 03:13:28.451567 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 03:13:28.508832 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:28.508890 | orchestrator | 2025-06-01 03:13:28.508903 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 03:13:28.557684 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-01 03:13:28.557730 | orchestrator | 2025-06-01 03:13:28.557735 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 03:13:29.279732 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:29.279795 | orchestrator | 2025-06-01 03:13:29.279808 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 03:13:29.329928 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:29.329982 | orchestrator | 2025-06-01 03:13:29.329991 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 03:13:30.670748 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:30.670815 | orchestrator | 2025-06-01 03:13:30.670828 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 03:13:31.226153 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:31.226212 | orchestrator | 2025-06-01 03:13:31.226220 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 03:13:32.345106 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:32.345170 | orchestrator | 2025-06-01 03:13:32.345187 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 03:13:45.868793 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:45.868898 | orchestrator | 
2025-06-01 03:13:45.868915 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-01 03:13:46.565180 | orchestrator | ok: [testbed-manager] 2025-06-01 03:13:46.565236 | orchestrator | 2025-06-01 03:13:46.565248 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-01 03:13:46.620254 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:13:46.620310 | orchestrator | 2025-06-01 03:13:46.620319 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-01 03:13:47.535530 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:47.535651 | orchestrator | 2025-06-01 03:13:47.535660 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-01 03:13:48.489896 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:48.489953 | orchestrator | 2025-06-01 03:13:48.489962 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-01 03:13:49.066926 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:49.066995 | orchestrator | 2025-06-01 03:13:49.067011 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-01 03:13:49.109057 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-01 03:13:49.109150 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-01 03:13:49.109166 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-01 03:13:49.109179 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-01 03:13:51.065887 | orchestrator | changed: [testbed-manager] 2025-06-01 03:13:51.065953 | orchestrator | 2025-06-01 03:13:51.065962 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-01 03:14:00.003972 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-01 03:14:00.004016 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-01 03:14:00.004027 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-01 03:14:00.004034 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-01 03:14:00.004046 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-01 03:14:00.004052 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-01 03:14:00.004059 | orchestrator | 2025-06-01 03:14:00.004066 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-01 03:14:01.043905 | orchestrator | changed: [testbed-manager] 2025-06-01 03:14:01.043945 | orchestrator | 2025-06-01 03:14:01.043953 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-01 03:14:01.087864 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:14:01.087906 | orchestrator | 2025-06-01 03:14:01.087915 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-01 03:14:04.113908 | orchestrator | changed: [testbed-manager] 2025-06-01 03:14:04.114012 | orchestrator | 2025-06-01 03:14:04.114078 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-01 03:14:04.155286 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:14:04.155366 | orchestrator | 2025-06-01 03:14:04.155380 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-01 03:15:43.972544 | orchestrator | changed: [testbed-manager] 2025-06-01 
03:15:43.972641 | orchestrator | 2025-06-01 03:15:43.972660 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-01 03:15:45.064047 | orchestrator | ok: [testbed-manager] 2025-06-01 03:15:45.064086 | orchestrator | 2025-06-01 03:15:45.064093 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:15:45.064101 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-01 03:15:45.064107 | orchestrator | 2025-06-01 03:15:45.499782 | orchestrator | ok: Runtime: 0:02:20.489182 2025-06-01 03:15:45.508312 | 2025-06-01 03:15:45.508400 | TASK [Reboot manager] 2025-06-01 03:15:47.068896 | orchestrator | ok: Runtime: 0:00:00.968311 2025-06-01 03:15:47.076627 | 2025-06-01 03:15:47.076724 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-01 03:16:01.134176 | orchestrator | ok 2025-06-01 03:16:01.145661 | 2025-06-01 03:16:01.145794 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-01 03:17:01.190819 | orchestrator | ok 2025-06-01 03:17:01.202883 | 2025-06-01 03:17:01.203059 | TASK [Deploy manager + bootstrap nodes] 2025-06-01 03:17:03.668737 | orchestrator | 2025-06-01 03:17:03.668998 | orchestrator | # DEPLOY MANAGER 2025-06-01 03:17:03.669024 | orchestrator | 2025-06-01 03:17:03.669039 | orchestrator | + set -e 2025-06-01 03:17:03.669053 | orchestrator | + echo 2025-06-01 03:17:03.669067 | orchestrator | + echo '# DEPLOY MANAGER' 2025-06-01 03:17:03.669085 | orchestrator | + echo 2025-06-01 03:17:03.669135 | orchestrator | + cat /opt/manager-vars.sh 2025-06-01 03:17:03.671912 | orchestrator | export NUMBER_OF_NODES=6 2025-06-01 03:17:03.671939 | orchestrator | 2025-06-01 03:17:03.671951 | orchestrator | export CEPH_VERSION=reef 2025-06-01 03:17:03.671965 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-01 03:17:03.671977 | orchestrator 
| export MANAGER_VERSION=latest 2025-06-01 03:17:03.671999 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-01 03:17:03.672010 | orchestrator | 2025-06-01 03:17:03.672028 | orchestrator | export ARA=false 2025-06-01 03:17:03.672040 | orchestrator | export DEPLOY_MODE=manager 2025-06-01 03:17:03.672058 | orchestrator | export TEMPEST=true 2025-06-01 03:17:03.672069 | orchestrator | export IS_ZUUL=true 2025-06-01 03:17:03.672080 | orchestrator | 2025-06-01 03:17:03.672098 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 03:17:03.672110 | orchestrator | export EXTERNAL_API=false 2025-06-01 03:17:03.672121 | orchestrator | 2025-06-01 03:17:03.672132 | orchestrator | export IMAGE_USER=ubuntu 2025-06-01 03:17:03.672146 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-01 03:17:03.672157 | orchestrator | 2025-06-01 03:17:03.672168 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-01 03:17:03.672311 | orchestrator | 2025-06-01 03:17:03.672327 | orchestrator | + echo 2025-06-01 03:17:03.672345 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 03:17:03.673203 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 03:17:03.673221 | orchestrator | ++ INTERACTIVE=false 2025-06-01 03:17:03.673234 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 03:17:03.673248 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 03:17:03.673641 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 03:17:03.673660 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 03:17:03.673674 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 03:17:03.673693 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 03:17:03.673710 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 03:17:03.673722 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 03:17:03.673737 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 03:17:03.673760 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 03:17:03.673780 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 03:17:03.673798 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 03:17:03.673825 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 03:17:03.673840 | orchestrator | ++ export ARA=false 2025-06-01 03:17:03.673853 | orchestrator | ++ ARA=false 2025-06-01 03:17:03.673873 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 03:17:03.673905 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 03:17:03.673921 | orchestrator | ++ export TEMPEST=true 2025-06-01 03:17:03.673932 | orchestrator | ++ TEMPEST=true 2025-06-01 03:17:03.674065 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 03:17:03.674103 | orchestrator | ++ IS_ZUUL=true 2025-06-01 03:17:03.674118 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 03:17:03.674130 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 03:17:03.674141 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 03:17:03.674152 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 03:17:03.674171 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 03:17:03.674191 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 03:17:03.674214 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 03:17:03.674225 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 03:17:03.674237 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 03:17:03.674248 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 03:17:03.674270 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-01 03:17:03.731559 | orchestrator | + docker version 2025-06-01 03:17:03.980707 | orchestrator | Client: Docker Engine - Community 2025-06-01 03:17:03.980810 | orchestrator | Version: 27.5.1 2025-06-01 03:17:03.980826 | orchestrator | API version: 1.47 2025-06-01 03:17:03.980840 | orchestrator | Go version: go1.22.11 2025-06-01 03:17:03.980851 | orchestrator | Git commit: 9f9e405 2025-06-01 03:17:03.980862 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-01 03:17:03.980874 | orchestrator | OS/Arch: linux/amd64 2025-06-01 03:17:03.980885 | orchestrator | Context: default 2025-06-01 03:17:03.980897 | orchestrator | 2025-06-01 03:17:03.980908 | orchestrator | Server: Docker Engine - Community 2025-06-01 03:17:03.980919 | orchestrator | Engine: 2025-06-01 03:17:03.980930 | orchestrator | Version: 27.5.1 2025-06-01 03:17:03.980942 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-01 03:17:03.980985 | orchestrator | Go version: go1.22.11 2025-06-01 03:17:03.980996 | orchestrator | Git commit: 4c9b3b0 2025-06-01 03:17:03.981007 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-01 03:17:03.981018 | orchestrator | OS/Arch: linux/amd64 2025-06-01 03:17:03.981029 | orchestrator | Experimental: false 2025-06-01 03:17:03.981040 | orchestrator | containerd: 2025-06-01 03:17:03.981051 | orchestrator | Version: 1.7.27 2025-06-01 03:17:03.981075 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-01 03:17:03.981087 | orchestrator | runc: 2025-06-01 03:17:03.981099 | orchestrator | Version: 1.2.5 2025-06-01 03:17:03.981110 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-01 03:17:03.981121 | orchestrator | docker-init: 2025-06-01 03:17:03.981132 | orchestrator | Version: 0.19.0 2025-06-01 03:17:03.981144 | orchestrator | GitCommit: de40ad0 2025-06-01 03:17:03.984045 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-01 03:17:03.992184 | orchestrator | + set -e 2025-06-01 03:17:03.992208 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 03:17:03.992219 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 03:17:03.992231 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 03:17:03.992242 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 03:17:03.992259 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 03:17:03.992271 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 
03:17:03.992282 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 03:17:03.992293 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 03:17:03.992303 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 03:17:03.992315 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 03:17:03.992325 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 03:17:03.992336 | orchestrator | ++ export ARA=false 2025-06-01 03:17:03.992347 | orchestrator | ++ ARA=false 2025-06-01 03:17:03.992358 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 03:17:03.992370 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 03:17:03.992381 | orchestrator | ++ export TEMPEST=true 2025-06-01 03:17:03.992392 | orchestrator | ++ TEMPEST=true 2025-06-01 03:17:03.992403 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 03:17:03.992414 | orchestrator | ++ IS_ZUUL=true 2025-06-01 03:17:03.992429 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 03:17:03.992440 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 03:17:03.992451 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 03:17:03.992462 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 03:17:03.992473 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 03:17:03.992484 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 03:17:03.992532 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 03:17:03.992546 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 03:17:03.992557 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 03:17:03.992568 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 03:17:03.992579 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 03:17:03.992590 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 03:17:03.992601 | orchestrator | ++ INTERACTIVE=false 2025-06-01 03:17:03.992612 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 03:17:03.992627 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2025-06-01 03:17:03.992643 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-01 03:17:03.992654 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-01 03:17:03.992666 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-01 03:17:03.998985 | orchestrator | + set -e 2025-06-01 03:17:03.999016 | orchestrator | + VERSION=reef 2025-06-01 03:17:03.999875 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-01 03:17:04.005683 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-01 03:17:04.005777 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-01 03:17:04.011990 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-01 03:17:04.018231 | orchestrator | + set -e 2025-06-01 03:17:04.018274 | orchestrator | + VERSION=2024.2 2025-06-01 03:17:04.018833 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-01 03:17:04.022168 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-01 03:17:04.022197 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-01 03:17:04.027918 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-01 03:17:04.028043 | orchestrator | ++ semver latest 7.0.0 2025-06-01 03:17:04.077647 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-01 03:17:04.077723 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-01 03:17:04.077739 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-01 03:17:04.077752 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-01 03:17:04.117317 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-01 03:17:04.119202 | orchestrator | + source /opt/venv/bin/activate 2025-06-01 03:17:04.120362 | orchestrator | ++ 
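The `set-ceph-version.sh` and `set-openstack-version.sh` calls traced above share one pattern: grep for the key first, then `sed -i` the value in place. A generic sketch of that pattern; the function name and example path are illustrative, not the actual testbed scripts:

```shell
# Sketch of the set-*-version.sh pattern from the trace: only rewrite
# the key if it already exists in the file.
set -e

set_config_version() {
    local key=$1 version=$2 file=$3
    # Same guard as the trace: grep first, so a missing key is not
    # silently left untouched by sed without anyone noticing.
    if [ -n "$(grep "^${key}:" "$file")" ]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

# e.g. set_config_version ceph_version reef configuration.yml
```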
deactivate nondestructive 2025-06-01 03:17:04.120461 | orchestrator | ++ '[' -n '' ']' 2025-06-01 03:17:04.120476 | orchestrator | ++ '[' -n '' ']' 2025-06-01 03:17:04.120534 | orchestrator | ++ hash -r 2025-06-01 03:17:04.120547 | orchestrator | ++ '[' -n '' ']' 2025-06-01 03:17:04.120559 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-01 03:17:04.120570 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-01 03:17:04.120581 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-01 03:17:04.120605 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-01 03:17:04.120626 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-01 03:17:04.120638 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-01 03:17:04.120650 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-01 03:17:04.120667 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-01 03:17:04.120679 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-01 03:17:04.120690 | orchestrator | ++ export PATH 2025-06-01 03:17:04.120702 | orchestrator | ++ '[' -n '' ']' 2025-06-01 03:17:04.120717 | orchestrator | ++ '[' -z '' ']' 2025-06-01 03:17:04.120728 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-01 03:17:04.120739 | orchestrator | ++ PS1='(venv) ' 2025-06-01 03:17:04.120750 | orchestrator | ++ export PS1 2025-06-01 03:17:04.120761 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-01 03:17:04.120772 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-01 03:17:04.120783 | orchestrator | ++ hash -r 2025-06-01 03:17:04.120906 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-01 03:17:05.361036 | orchestrator | 2025-06-01 03:17:05.361136 | orchestrator | PLAY [Copy custom facts] 
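The run of `++` lines above is `/opt/venv/bin/activate` executing under xtrace: it records the old `PATH`, prepends `/opt/venv/bin`, and exports `VIRTUAL_ENV` (the matching `deactivate` appears later in the log and undoes this). Stripped to the PATH bookkeeping only, as a sketch; the real script also manages `PS1`, `hash -r`, and a `deactivate()` function:

```shell
# Minimal sketch of the PATH handling done by bin/activate.
activate_venv() {
    _OLD_VIRTUAL_PATH="$PATH"
    VIRTUAL_ENV="$1"
    PATH="$VIRTUAL_ENV/bin:$PATH"
    export VIRTUAL_ENV PATH
}

# Counterpart of the deactivate seen later in this log.
deactivate_venv() {
    PATH="$_OLD_VIRTUAL_PATH"
    export PATH
    unset VIRTUAL_ENV _OLD_VIRTUAL_PATH
}
```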
******************************************************* 2025-06-01 03:17:05.361151 | orchestrator | 2025-06-01 03:17:05.361161 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-01 03:17:05.908005 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:05.908117 | orchestrator | 2025-06-01 03:17:05.908132 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-01 03:17:06.871255 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:06.871368 | orchestrator | 2025-06-01 03:17:06.871385 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-01 03:17:06.871399 | orchestrator | 2025-06-01 03:17:06.871410 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:17:09.253643 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:09.253760 | orchestrator | 2025-06-01 03:17:09.253779 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-01 03:17:09.300168 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:09.300244 | orchestrator | 2025-06-01 03:17:09.300261 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-01 03:17:09.771073 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:09.771188 | orchestrator | 2025-06-01 03:17:09.771205 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-01 03:17:09.811653 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:09.811749 | orchestrator | 2025-06-01 03:17:09.811768 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-01 03:17:10.157068 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:10.157172 | orchestrator | 2025-06-01 03:17:10.157187 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-06-01 03:17:10.211879 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:10.211958 | orchestrator | 2025-06-01 03:17:10.211972 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-01 03:17:10.564949 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:10.565052 | orchestrator | 2025-06-01 03:17:10.565067 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-01 03:17:10.681251 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:10.681337 | orchestrator | 2025-06-01 03:17:10.681350 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-01 03:17:10.681363 | orchestrator | 2025-06-01 03:17:10.681377 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:17:12.439806 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:12.439942 | orchestrator | 2025-06-01 03:17:12.439969 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-01 03:17:12.553867 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-01 03:17:12.553980 | orchestrator | 2025-06-01 03:17:12.554005 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-01 03:17:12.607387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-01 03:17:12.607459 | orchestrator | 2025-06-01 03:17:12.607473 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-01 03:17:13.668819 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-01 03:17:13.668939 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-06-01 03:17:13.668962 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-01 03:17:13.668982 | orchestrator | 2025-06-01 03:17:13.669004 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-01 03:17:15.492000 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-01 03:17:15.492117 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-01 03:17:15.492135 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-01 03:17:15.492147 | orchestrator | 2025-06-01 03:17:15.492160 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-01 03:17:16.153821 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 03:17:16.153922 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:16.153939 | orchestrator | 2025-06-01 03:17:16.153952 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-01 03:17:16.800701 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 03:17:16.800812 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:16.800830 | orchestrator | 2025-06-01 03:17:16.800843 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-06-01 03:17:16.857441 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:16.857551 | orchestrator | 2025-06-01 03:17:16.857568 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-01 03:17:17.213481 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:17.213632 | orchestrator | 2025-06-01 03:17:17.213650 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-01 03:17:17.280168 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-01 03:17:17.280230 | orchestrator | 2025-06-01 03:17:17.280243 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-01 03:17:18.325248 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:18.325352 | orchestrator | 2025-06-01 03:17:18.325367 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-01 03:17:19.190206 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:19.190311 | orchestrator | 2025-06-01 03:17:19.190328 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-01 03:17:31.079315 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:31.079439 | orchestrator | 2025-06-01 03:17:31.079457 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-01 03:17:31.128019 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:31.128112 | orchestrator | 2025-06-01 03:17:31.128128 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-01 03:17:31.128141 | orchestrator | 2025-06-01 03:17:31.128152 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:17:33.118070 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:33.118181 | orchestrator | 2025-06-01 03:17:33.118229 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-01 03:17:33.226946 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-01 03:17:33.227034 | orchestrator | 2025-06-01 03:17:33.227048 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-01 03:17:33.291661 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 03:17:33.291718 | orchestrator | 2025-06-01 03:17:33.291731 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-01 03:17:35.878460 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:35.878602 | orchestrator | 2025-06-01 03:17:35.878621 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-01 03:17:35.936238 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:35.936372 | orchestrator | 2025-06-01 03:17:35.936414 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-01 03:17:36.069414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-01 03:17:36.069558 | orchestrator | 2025-06-01 03:17:36.069576 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-01 03:17:38.865035 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-01 03:17:38.865142 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-01 03:17:38.865156 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-01 03:17:38.865168 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-01 03:17:38.865178 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-01 03:17:38.865188 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-01 03:17:38.865198 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-01 03:17:38.865208 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-01 03:17:38.865219 | orchestrator | 2025-06-01 03:17:38.865230 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-06-01 03:17:39.503023 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:39.503117 | orchestrator | 2025-06-01 03:17:39.503130 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-01 03:17:40.158296 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:40.158399 | orchestrator | 2025-06-01 03:17:40.158416 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-01 03:17:40.244608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-01 03:17:40.244708 | orchestrator | 2025-06-01 03:17:40.244724 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-01 03:17:41.443234 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-01 03:17:41.443336 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-01 03:17:41.443348 | orchestrator | 2025-06-01 03:17:41.443356 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-01 03:17:42.071160 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:42.071265 | orchestrator | 2025-06-01 03:17:42.071281 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-01 03:17:42.131502 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:42.131586 | orchestrator | 2025-06-01 03:17:42.131598 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-01 03:17:42.207228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-01 03:17:42.207298 | orchestrator | 2025-06-01 03:17:42.207312 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-06-01 03:17:43.562886 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 03:17:43.562988 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 03:17:43.563003 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:43.563017 | orchestrator | 2025-06-01 03:17:43.563029 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-01 03:17:44.170609 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:44.170732 | orchestrator | 2025-06-01 03:17:44.170758 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-01 03:17:44.227005 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:44.227095 | orchestrator | 2025-06-01 03:17:44.227108 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-01 03:17:44.327018 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-01 03:17:44.327109 | orchestrator | 2025-06-01 03:17:44.327121 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-01 03:17:44.838688 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:44.838792 | orchestrator | 2025-06-01 03:17:44.838809 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-01 03:17:45.243503 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:45.243647 | orchestrator | 2025-06-01 03:17:45.243661 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-01 03:17:46.482670 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-01 03:17:46.482792 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-01 
03:17:46.482807 | orchestrator | 2025-06-01 03:17:46.482820 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-01 03:17:47.098094 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:47.098205 | orchestrator | 2025-06-01 03:17:47.098222 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-01 03:17:47.495912 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:47.496025 | orchestrator | 2025-06-01 03:17:47.496042 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-01 03:17:47.869169 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:47.869267 | orchestrator | 2025-06-01 03:17:47.869282 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-01 03:17:47.914118 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:47.914209 | orchestrator | 2025-06-01 03:17:47.914223 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-01 03:17:47.986953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-01 03:17:47.987038 | orchestrator | 2025-06-01 03:17:47.987052 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-01 03:17:48.029181 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:48.029253 | orchestrator | 2025-06-01 03:17:48.029270 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-01 03:17:50.049821 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-01 03:17:50.049927 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-01 03:17:50.049942 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-01 
03:17:50.049955 | orchestrator | 2025-06-01 03:17:50.049968 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-06-01 03:17:50.742235 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:50.742333 | orchestrator | 2025-06-01 03:17:50.742348 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-01 03:17:51.410821 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:51.410927 | orchestrator | 2025-06-01 03:17:51.410944 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-01 03:17:52.135367 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:52.135468 | orchestrator | 2025-06-01 03:17:52.135484 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-01 03:17:52.209710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-01 03:17:52.209760 | orchestrator | 2025-06-01 03:17:52.209773 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-01 03:17:52.248844 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:52.248941 | orchestrator | 2025-06-01 03:17:52.248959 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-01 03:17:52.925791 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-01 03:17:52.925896 | orchestrator | 2025-06-01 03:17:52.925911 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-01 03:17:53.013496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-01 03:17:53.013616 | orchestrator | 2025-06-01 03:17:53.013630 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-01 03:17:53.720725 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:53.720831 | orchestrator | 2025-06-01 03:17:53.720847 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-01 03:17:54.323464 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:54.323621 | orchestrator | 2025-06-01 03:17:54.323640 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-01 03:17:54.381383 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:17:54.381476 | orchestrator | 2025-06-01 03:17:54.381490 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-01 03:17:54.431654 | orchestrator | ok: [testbed-manager] 2025-06-01 03:17:54.431743 | orchestrator | 2025-06-01 03:17:54.431757 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-01 03:17:55.234584 | orchestrator | changed: [testbed-manager] 2025-06-01 03:17:55.234686 | orchestrator | 2025-06-01 03:17:55.234699 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-01 03:18:59.753946 | orchestrator | changed: [testbed-manager] 2025-06-01 03:18:59.754117 | orchestrator | 2025-06-01 03:18:59.754137 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-01 03:19:00.797878 | orchestrator | ok: [testbed-manager] 2025-06-01 03:19:00.797980 | orchestrator | 2025-06-01 03:19:00.797995 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-06-01 03:19:00.859058 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:19:00.859123 | orchestrator | 2025-06-01 03:19:00.859137 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-06-01 03:19:03.598950 | orchestrator | changed: [testbed-manager] 2025-06-01 03:19:03.599055 | orchestrator | 2025-06-01 03:19:03.599073 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-01 03:19:03.650619 | orchestrator | ok: [testbed-manager] 2025-06-01 03:19:03.650698 | orchestrator | 2025-06-01 03:19:03.650714 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-01 03:19:03.650727 | orchestrator | 2025-06-01 03:19:03.650738 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-01 03:19:03.699037 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:19:03.699082 | orchestrator | 2025-06-01 03:19:03.699096 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-01 03:20:03.753400 | orchestrator | Pausing for 60 seconds 2025-06-01 03:20:03.753521 | orchestrator | changed: [testbed-manager] 2025-06-01 03:20:03.753601 | orchestrator | 2025-06-01 03:20:03.753614 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-01 03:20:07.939154 | orchestrator | changed: [testbed-manager] 2025-06-01 03:20:07.939266 | orchestrator | 2025-06-01 03:20:07.939283 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-01 03:20:49.684840 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-01 03:20:49.684960 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-01 03:20:49.684975 | orchestrator | changed: [testbed-manager]
2025-06-01 03:20:49.684988 | orchestrator |
2025-06-01 03:20:49.685000 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-06-01 03:20:58.164690 | orchestrator | changed: [testbed-manager]
2025-06-01 03:20:58.164837 | orchestrator |
2025-06-01 03:20:58.164856 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-06-01 03:20:58.237982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-06-01 03:20:58.238165 | orchestrator |
2025-06-01 03:20:58.238185 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-01 03:20:58.238198 | orchestrator |
2025-06-01 03:20:58.238209 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-06-01 03:20:58.284312 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:20:58.284391 | orchestrator |
2025-06-01 03:20:58.284405 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:20:58.284418 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-06-01 03:20:58.284429 | orchestrator |
2025-06-01 03:20:58.375468 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-01 03:20:58.375589 | orchestrator | + deactivate
2025-06-01 03:20:58.375604 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-01 03:20:58.375617 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-01 03:20:58.375628 | orchestrator | + export PATH
2025-06-01 03:20:58.375640 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-01 03:20:58.375652 | orchestrator | + '[' -n '' ']'
2025-06-01 03:20:58.375663 | orchestrator | + hash -r
2025-06-01 03:20:58.375674 | orchestrator | + '[' -n '' ']'
2025-06-01 03:20:58.375685 | orchestrator | + unset VIRTUAL_ENV
2025-06-01 03:20:58.375696 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-01 03:20:58.375731 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-01 03:20:58.375743 | orchestrator | + unset -f deactivate
2025-06-01 03:20:58.375754 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-06-01 03:20:58.382608 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-01 03:20:58.382632 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-01 03:20:58.382643 | orchestrator | + local max_attempts=60
2025-06-01 03:20:58.382654 | orchestrator | + local name=ceph-ansible
2025-06-01 03:20:58.382665 | orchestrator | + local attempt_num=1
2025-06-01 03:20:58.383541 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-01 03:20:58.420392 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 03:20:58.420424 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-01 03:20:58.420436 | orchestrator | + local max_attempts=60
2025-06-01 03:20:58.420447 | orchestrator | + local name=kolla-ansible
2025-06-01 03:20:58.420458 | orchestrator | + local attempt_num=1
2025-06-01 03:20:58.420629 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-01 03:20:58.462157 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 03:20:58.462218 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-01 03:20:58.462232 | orchestrator | + local max_attempts=60
2025-06-01 03:20:58.462244 | orchestrator | + local name=osism-ansible
2025-06-01 03:20:58.462255 | orchestrator | + local attempt_num=1
2025-06-01 03:20:58.462266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-01 03:20:58.499646 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 03:20:58.499704 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-01 03:20:58.499719 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-01 03:20:59.196603 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-06-01 03:20:59.396048 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-06-01 03:20:59.396150 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396166 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396178 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-06-01 03:20:59.396191 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-06-01 03:20:59.396234 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396247 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396258 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy)
2025-06-01 03:20:59.396269 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396280 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-06-01 03:20:59.396291 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396301 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-06-01 03:20:59.396312 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396323 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396334 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.396345 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-06-01 03:20:59.407893 | orchestrator | ++ semver latest 7.0.0
2025-06-01 03:20:59.461700 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-01 03:20:59.461769 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-01 03:20:59.461782 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-06-01 03:20:59.466572 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-06-01 03:21:01.167369 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:21:01.167472 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:21:01.167487 | orchestrator | Registering Redlock._release_script
2025-06-01 03:21:01.346091 | orchestrator | 2025-06-01 03:21:01 | INFO  | Task 3e7a596a-cf18-4971-9a43-40bfa2a6479c (resolvconf) was prepared for execution.
2025-06-01 03:21:01.346185 | orchestrator | 2025-06-01 03:21:01 | INFO  | It takes a moment until task 3e7a596a-cf18-4971-9a43-40bfa2a6479c (resolvconf) has been started and output is visible here.
2025-06-01 03:21:05.313094 | orchestrator |
2025-06-01 03:21:05.313219 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-06-01 03:21:05.314186 | orchestrator |
2025-06-01 03:21:05.315038 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 03:21:05.317216 | orchestrator | Sunday 01 June 2025 03:21:05 +0000 (0:00:00.147) 0:00:00.147 ***********
2025-06-01 03:21:08.944475 | orchestrator | ok: [testbed-manager]
2025-06-01 03:21:08.944857 | orchestrator |
2025-06-01 03:21:08.945700 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-01 03:21:08.946407 | orchestrator | Sunday 01 June 2025 03:21:08 +0000 (0:00:03.635) 0:00:03.782 ***********
2025-06-01 03:21:09.007684 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:21:09.008146 | orchestrator |
2025-06-01 03:21:09.010166 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-01 03:21:09.010805 | orchestrator | Sunday 01 June 2025 03:21:09 +0000 (0:00:00.063) 0:00:03.846 ***********
2025-06-01 03:21:09.093247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-06-01 03:21:09.093679 | orchestrator |
2025-06-01 03:21:09.094720 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-01 03:21:09.095656 | orchestrator | Sunday 01 June 2025 03:21:09 +0000 (0:00:00.084) 0:00:03.931 ***********
2025-06-01 03:21:09.162750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-06-01 03:21:09.163380 | orchestrator |
2025-06-01 03:21:09.164396 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-01 03:21:09.165409 | orchestrator | Sunday 01 June 2025 03:21:09 +0000 (0:00:00.070) 0:00:04.002 ***********
2025-06-01 03:21:10.179483 | orchestrator | ok: [testbed-manager]
2025-06-01 03:21:10.179635 | orchestrator |
2025-06-01 03:21:10.179652 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-01 03:21:10.179665 | orchestrator | Sunday 01 June 2025 03:21:10 +0000 (0:00:01.015) 0:00:05.017 ***********
2025-06-01 03:21:10.234203 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:21:10.235593 | orchestrator |
2025-06-01 03:21:10.235855 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-01 03:21:10.236755 | orchestrator | Sunday 01 June 2025 03:21:10 +0000 (0:00:00.055) 0:00:05.073 ***********
2025-06-01 03:21:10.699875 | orchestrator | ok: [testbed-manager]
2025-06-01 03:21:10.700097 | orchestrator |
2025-06-01 03:21:10.701205 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-01 03:21:10.702651 | orchestrator | Sunday 01 June 2025 03:21:10 +0000 (0:00:00.464) 0:00:05.537 ***********
2025-06-01 03:21:10.778692 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:21:10.779211 | orchestrator |
2025-06-01 03:21:10.779962 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-01 03:21:10.780985 | orchestrator | Sunday 01 June 2025 03:21:10 +0000 (0:00:00.079) 0:00:05.617 ***********
2025-06-01 03:21:11.284574 | orchestrator | changed: [testbed-manager]
2025-06-01 03:21:11.286775 | orchestrator |
2025-06-01 03:21:11.287411 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-01 03:21:11.288074 | orchestrator | Sunday 01 June 2025 03:21:11 +0000 (0:00:00.505) 0:00:06.123 ***********
2025-06-01 03:21:12.302746 | orchestrator | changed: [testbed-manager]
2025-06-01 03:21:12.303410 | orchestrator |
2025-06-01 03:21:12.304852 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-01 03:21:12.305431 | orchestrator | Sunday 01 June 2025 03:21:12 +0000 (0:00:01.016) 0:00:07.140 ***********
2025-06-01 03:21:13.207478 | orchestrator | ok: [testbed-manager]
2025-06-01 03:21:13.207615 | orchestrator |
2025-06-01 03:21:13.208944 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-01 03:21:13.209404 | orchestrator | Sunday 01 June 2025 03:21:13 +0000 (0:00:00.903) 0:00:08.043 ***********
2025-06-01 03:21:13.292454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-06-01 03:21:13.293247 | orchestrator |
2025-06-01 03:21:13.294163 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-01 03:21:13.294688 | orchestrator | Sunday 01 June 2025 03:21:13 +0000 (0:00:00.087) 0:00:08.131 ***********
2025-06-01 03:21:14.382130 | orchestrator | changed: [testbed-manager]
2025-06-01 03:21:14.383236 | orchestrator |
2025-06-01 03:21:14.384208 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:21:14.384451 | orchestrator | 2025-06-01 03:21:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:21:14.384588 | orchestrator | 2025-06-01 03:21:14 | INFO  | Please wait and do not abort execution.
2025-06-01 03:21:14.386010 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 03:21:14.387083 | orchestrator |
2025-06-01 03:21:14.387549 | orchestrator |
2025-06-01 03:21:14.388510 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:21:14.389806 | orchestrator | Sunday 01 June 2025 03:21:14 +0000 (0:00:01.088) 0:00:09.220 ***********
2025-06-01 03:21:14.389829 | orchestrator | ===============================================================================
2025-06-01 03:21:14.390673 | orchestrator | Gathering Facts --------------------------------------------------------- 3.64s
2025-06-01 03:21:14.391456 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s
2025-06-01 03:21:14.392216 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.02s
2025-06-01 03:21:14.393289 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.02s
2025-06-01 03:21:14.393703 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s
2025-06-01 03:21:14.394218 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s
2025-06-01 03:21:14.394822 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s
2025-06-01 03:21:14.395318 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-06-01 03:21:14.395815 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-06-01 03:21:14.396434 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-06-01 03:21:14.397816 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-06-01 03:21:14.398114 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-06-01 03:21:14.398966 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-06-01 03:21:14.808623 | orchestrator | + osism apply sshconfig
2025-06-01 03:21:16.419350 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:21:16.419450 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:21:16.419465 | orchestrator | Registering Redlock._release_script
2025-06-01 03:21:16.472796 | orchestrator | 2025-06-01 03:21:16 | INFO  | Task 40e4c837-ce08-4506-b60a-a44c72e3c1d4 (sshconfig) was prepared for execution.
2025-06-01 03:21:16.472857 | orchestrator | 2025-06-01 03:21:16 | INFO  | It takes a moment until task 40e4c837-ce08-4506-b60a-a44c72e3c1d4 (sshconfig) has been started and output is visible here.
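The shell trace earlier in this log polls each manager container (ceph-ansible, kolla-ansible, osism-ansible) with `docker inspect -f '{{.State.Health.Status}}'` until Docker reports it healthy. The following is a hypothetical reconstruction of that `wait_for_container_healthy` helper, not the actual testbed script; `check_health` is an assumed wrapper introduced here so the loop can be exercised without a Docker daemon.

```shell
# Assumed wrapper around the call visible in the trace:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' <name>
check_health() {
    docker inspect -f '{{.State.Health.Status}}' "$1" 2>/dev/null
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy",
    # giving up after max_attempts polls.
    until [ "$(check_health "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace the containers are already healthy on the first poll, so the loop body never runs and each call returns immediately.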
2025-06-01 03:21:20.316892 | orchestrator |
2025-06-01 03:21:20.317136 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-01 03:21:20.317855 | orchestrator |
2025-06-01 03:21:20.318504 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-01 03:21:20.320213 | orchestrator | Sunday 01 June 2025 03:21:20 +0000 (0:00:00.174) 0:00:00.174 ***********
2025-06-01 03:21:20.825303 | orchestrator | ok: [testbed-manager]
2025-06-01 03:21:20.825835 | orchestrator |
2025-06-01 03:21:20.825870 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-01 03:21:20.826499 | orchestrator | Sunday 01 June 2025 03:21:20 +0000 (0:00:00.510) 0:00:00.685 ***********
2025-06-01 03:21:21.321183 | orchestrator | changed: [testbed-manager]
2025-06-01 03:21:21.322718 | orchestrator |
2025-06-01 03:21:21.323720 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-01 03:21:21.324638 | orchestrator | Sunday 01 June 2025 03:21:21 +0000 (0:00:00.496) 0:00:01.182 ***********
2025-06-01 03:21:26.764364 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-01 03:21:26.764862 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-01 03:21:26.766003 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-01 03:21:26.767032 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-01 03:21:26.767693 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-01 03:21:26.768441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-01 03:21:26.768674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-01 03:21:26.769179 | orchestrator |
2025-06-01 03:21:26.769663 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-01 03:21:26.770113 | orchestrator | Sunday 01 June 2025 03:21:26 +0000 (0:00:05.441) 0:00:06.624 ***********
2025-06-01 03:21:26.821273 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:21:26.821351 | orchestrator |
2025-06-01 03:21:26.822231 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-01 03:21:26.822937 | orchestrator | Sunday 01 June 2025 03:21:26 +0000 (0:00:00.058) 0:00:06.682 ***********
2025-06-01 03:21:27.368234 | orchestrator | changed: [testbed-manager]
2025-06-01 03:21:27.369643 | orchestrator |
2025-06-01 03:21:27.370817 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:21:27.370849 | orchestrator | 2025-06-01 03:21:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:21:27.370863 | orchestrator | 2025-06-01 03:21:27 | INFO  | Please wait and do not abort execution.
2025-06-01 03:21:27.371620 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:21:27.372204 | orchestrator |
2025-06-01 03:21:27.373066 | orchestrator |
2025-06-01 03:21:27.374123 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:21:27.375335 | orchestrator | Sunday 01 June 2025 03:21:27 +0000 (0:00:00.547) 0:00:07.230 ***********
2025-06-01 03:21:27.375824 | orchestrator | ===============================================================================
2025-06-01 03:21:27.376389 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.44s
2025-06-01 03:21:27.376933 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s
2025-06-01 03:21:27.377603 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s
2025-06-01 03:21:27.378390 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s
2025-06-01 03:21:27.379093 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-06-01 03:21:27.799759 | orchestrator | + osism apply known-hosts
2025-06-01 03:21:29.394382 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:21:29.394486 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:21:29.394502 | orchestrator | Registering Redlock._release_script
2025-06-01 03:21:29.447706 | orchestrator | 2025-06-01 03:21:29 | INFO  | Task de2a6cf5-ca6f-4bce-bdee-f0137e713f74 (known-hosts) was prepared for execution.
2025-06-01 03:21:29.447766 | orchestrator | 2025-06-01 03:21:29 | INFO  | It takes a moment until task de2a6cf5-ca6f-4bce-bdee-f0137e713f74 (known-hosts) has been started and output is visible here.
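The `osism apply known-hosts` run that follows scans every testbed host with ssh-keyscan and writes the returned rsa, ecdsa, and ed25519 host keys into a known_hosts file. A minimal standalone sketch of that pattern is shown below; `scan_host` and `populate_known_hosts` are illustrative names introduced here, not the actual role implementation.

```shell
# Assumed wrapper around ssh-keyscan; -T bounds the per-host timeout and -t
# limits output to the key types that appear in the log.
scan_host() {
    ssh-keyscan -T 5 -t rsa,ecdsa,ed25519 "$1" 2>/dev/null
}

# Append the scanned keys of every given host to a known_hosts file,
# then deduplicate the result in place.
populate_known_hosts() {
    local out="$1"
    shift
    local host
    for host in "$@"; do
        scan_host "$host" >> "$out" || true
    done
    sort -u -o "$out" "$out"
}
```

Usage would look like `populate_known_hosts ~/.ssh/known_hosts testbed-manager testbed-node-0`; the real role additionally repeats the scan per `ansible_host` address, which is why the same keys appear again under 192.168.16.x below.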
2025-06-01 03:21:33.273871 | orchestrator |
2025-06-01 03:21:33.274227 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-06-01 03:21:33.275347 | orchestrator |
2025-06-01 03:21:33.277301 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-06-01 03:21:33.278005 | orchestrator | Sunday 01 June 2025 03:21:33 +0000 (0:00:00.161) 0:00:00.161 ***********
2025-06-01 03:21:39.166515 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-01 03:21:39.166868 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-01 03:21:39.168276 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-01 03:21:39.170323 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-01 03:21:39.171079 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-01 03:21:39.172372 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-01 03:21:39.173193 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-01 03:21:39.174568 | orchestrator |
2025-06-01 03:21:39.175286 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-06-01 03:21:39.175963 | orchestrator | Sunday 01 June 2025 03:21:39 +0000 (0:00:05.892) 0:00:06.054 ***********
2025-06-01 03:21:39.341308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-01 03:21:39.342075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-01 03:21:39.342651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-01 03:21:39.343258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-01 03:21:39.344265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-01 03:21:39.345120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-01 03:21:39.345942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-01 03:21:39.346408 | orchestrator |
2025-06-01 03:21:39.346845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:39.347512 | orchestrator | Sunday 01 June 2025 03:21:39 +0000 (0:00:00.177) 0:00:06.231 ***********
2025-06-01 03:21:40.486122 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCBMMsJD01Zyv8uEOegL11mHmgUOQXr5Qw2n3xR2fz5c8QD3OOkb1L8C7k0NrZohf6t8n76SWG9fPj12HtLaMSg=)
2025-06-01 03:21:40.486215 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgLCK7qhgRVon+MxzHktQwoRyPFs1/fzDQlYuYQ8NJuVGXJJod7dx2HsILpF/A8lwOTWcXemMZzR3jJUzsOSONqJfI5STRICQs2lrYjOEi5NE2Y4mHOxLPoFKwjIqu2gGAwKpdfnwayigNGhzdCCY37bkpQe2asweXHiTad9E4Ku6wxlSLwNLy9aknrQX1bKnRaj34ZcA3Z3ykVmmQHxMvCXC74fr479rEtRgAdycSkYQmk6wHmjpqAAduLnpwQqXoPTOkImEck9QOhx+smS2uUPQGWHehN3IUrxUGwulogPIkjXzBedXsF+/XLJZyrEbLBcv8HWk5fpb6MqI/03/hjSCSMyRUd+ado8g+7S8mK40jv0KWc1gzwClb+bADgaceJMH03K/Xmn5gHt19RUqK49HMRNoiqo/GxLWnLX5n1+p1ukXYK2JPSELs7kbTRSUVccz/uD+IRK4CmTlXmMHuYGjsf8+McnZz/6UW43tVm5t6q194MziOguKS/KC5OZU=)
2025-06-01 03:21:40.486648 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+hBwkZ9DcOoaILGY+bBuMrbBg51lDPWfpxk53lEmKA)
2025-06-01 03:21:40.487458 | orchestrator |
2025-06-01 03:21:40.488118 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:40.488860 | orchestrator | Sunday 01 June 2025 03:21:40 +0000 (0:00:01.143) 0:00:07.374 ***********
2025-06-01 03:21:41.488571 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/FibvXSyijBiu6tLukOpxAEahJAdPXHvcKoUsRWNpMKIbXtNb8vNDj4oj5+1hZBvBTeofgpuYUW2M0oWGP5XNVUxDfuDTLmS/c4s9cpabltuPkkf4GjTKzr8qZetIE3EwnGVqqwhTpi5XG+9xFzP1CJI1UKjgTcCBQEkS9/GaylpbNiYhQdqob7G9aY7hOKeUQPHaB8ZCJUZFy5ViOg+uo7ZZV9FB2hCUIBzOgXcfN6BcXaZD7chi5TkY6SMFXfRyAqecQJ88JYZARWr6lwpQuKbF4AUqKDZ4bpWbP5n/42aIQt8LC6LTIXIZiimgTABe1LeaRNX/kPGnwoQKzRJjWLQRwayWvGDu//9FKYhPWljkSNQrNNpLm0gehNPOD2YZAEq8H11OEEWnjtqV1VA6L8DfvXuTFsVhhi7PLL7mt8Aher/OK8XeaOqABP7v8rKoOKnn/Dk2h1rFmhrWVLsbMdTioYDGEdSKOSwJenca9u3drAmSsgyvASVt/N1SGNk=)
2025-06-01 03:21:41.488819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGL2L0GWifDfqPZLun2CpFkSUd0DS2lwtdeQZRsE7h6TYb9RTo41cwJjXmEQy/nvsT5LXcwB4Uzlr9d9Kxj/uKA=)
2025-06-01 03:21:41.489887 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXiXhRSL8RNAjfOQdi10tR79V5AjkppX7KoY2f3JEoQ)
2025-06-01 03:21:41.490963 | orchestrator |
2025-06-01 03:21:41.491721 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:41.492463 | orchestrator | Sunday 01 June 2025 03:21:41 +0000 (0:00:01.003) 0:00:08.378 ***********
2025-06-01 03:21:42.519499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBChq3fy9bcT6EmBR/DTd2cxgIQGXagRzscIVubnJVgRdOjM4b6sC/piWR9wbTitXH8dEHHhQ/CbffmNueuD706A=)
2025-06-01 03:21:42.519670 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx585fE3CLr/LcOyHeBUDyD0BUknY6L4Zm3LH+/k6JYbjbzZxZvaYjz5oi+zJivCSiqcgVInzlXIHoEGOBW0HZgbVD6jndFYC2UISn07QLXsVYiCOB0bzw9YXM00WQ/nQgOBS9G8mmKOA3rImsoIRa0LXZcJNH3p+/ywXpvmJY6LDOda5bym5lsGZRFXxHvQuf6gy4cL0ZM6JqpKwsYrinD402mWGmgzpLXc4myDoF5f0CWSZIMZRt9HVvk97OdwxK1W5+SX3nkoOuHUVtiAROWod9f7FedrPz9Sb0z7/hGCOj3M3LTCM5M0u6HXGPDq4v6hJ4EBEkjWyNB+xtEGvIl0VDW3UI9fIBZGieUJ4gnIJ43l7wAep0QGj0IaaTiq3Q45m9SHC6KocFgXKkyGZsC/1OFgFUg2XQVOP5RKl0JXTs5JO9kTk832KsyDrrizYQIXZ5FaRQzyOvMZjeFDd5VlAL852Fbuv9bWCPFSMFfnivoTqd4/rCYq3NiUrEYNc=)
2025-06-01 03:21:42.520576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWa4f6D6e/RiqxEwlWzml2R/IuTO5vByg8yGp8XJEDu)
2025-06-01 03:21:42.521331 | orchestrator |
2025-06-01 03:21:42.521793 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:42.522462 | orchestrator | Sunday 01 June 2025 03:21:42 +0000 (0:00:01.029) 0:00:09.407 ***********
2025-06-01 03:21:43.505903 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC23urp8lF1sQtbGtQVU3bgKXJkjWqTeEoD+9reRj0FnmFieZunNzLQrF7u4feB5ViykujBRh018ThV5okI5hK4=)
2025-06-01 03:21:43.506871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDaJCE05iD1YFNyIkW7AHlebmgc/Swpn9hHv+NgdlvEX)
2025-06-01 03:21:43.506909 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9jCOUrtNz/pIfd+RvMbq/Asv1UY0WkExRwYMpWL5DaWFZu5BVDvUwTFzFlRRQBSYlrf3XUwwh1hMj4PxEYEtuHOcDCsGTsJwamZNyqeTcDMTvURPs2ex/hRKVnF2k1pvQIfTHsbVLvhKcK5i0TfMtZbom97ncKEIjv85iTT6qaVodXPjk5h1ugUvNQBVyk+5Oj8lLwi8/6o4LvXzZJIIXlxAWmoyxjMibHYIYFcF1e4+YpvrL/wKhjRyh3A80chnsYqtXPAQdsD/MsnrAlzXsKSe3dVDOwdDGnMrq8xtJBUgIkj7N7s+xttCOfb5XtE0tpmW99rR1Snt8W5lIPUCYt9wYtlRfN8aaeYCLbw1i8HNb1nNXql/PdLbniOKnsmq4L4Ekj50db5Ke6QRutuyDxZL7Dy92XEA1L0EOkLlJLN3ZzcrBhQxSlRvPdI+wozKpAqbfJtfRPPe8iOaSbsAQ5FuHlMOINaoxlKrW03it7a4NC5ZH5058BVAj4t9ovDE=)
2025-06-01 03:21:43.506926 | orchestrator |
2025-06-01 03:21:43.507146 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:43.507550 | orchestrator | Sunday 01 June 2025 03:21:43 +0000 (0:00:00.986) 0:00:10.394 ***********
2025-06-01 03:21:44.526643 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMyFODTLdf7GRri8FI4CrzuMt0tOlUScIHEGXpSlvPlD)
2025-06-01 03:21:44.527117 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIiZbPjN4rkxXNrjBnKLCOF9akoEIeO6KKlqt1PxlRf8uGXnQbbhZmUF3UcVPOg3pdJ8RbsvMivEnN0XFK4ZLPRsb9IReD8faXoMd6xFBJQN+rjW7y26P9/ZAOSWKv96LWYE7RKSfa5gMOOkaHvgYik5GvFadrPVSLqzDAGVGgzMltn5oJca5D1vsNcFaStZF7YbsuXtYXCEL6MviBYic5L14Tw1ouLuvK1cfcQlpZc3RA2Fy7fm+LcPqgq6z9Wbipy91G6jgeoQmQigOUUcsY8ycdrAD9cd23DilKhN7GN43Y/8WtHXucrAlLwtbr354tUmFs+zgvGb+ns58BbxTg4HYMBVjjPTQBL/l/eiSET3Oq9Gi1jqtV4cXgwXsSYu2Fqnb+RB0qVCHj3nig3FdEx+8EveZpyFYf+jc6eyyZmLDHL7PbC+wH4Xy8yqtytI0FNcfGkROIyFy/t3JlU0SnKz6L5GT6FIL0AIBAG4YnG18Q+ClMzfCZZcIH+Z41MCk=)
2025-06-01 03:21:44.528176 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjvC3X31kE9VcfZV6p75nuWun8CeV6emh2ma+mpgiEi02diUtxPD7PA3qE3tAt0Be13fnL9CekKdDCuyFnTTAw=)
2025-06-01 03:21:44.529191 | orchestrator |
2025-06-01 03:21:44.530248 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:44.530696 | orchestrator | Sunday 01 June 2025 03:21:44 +0000 (0:00:01.020) 0:00:11.415 ***********
2025-06-01 03:21:45.595127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXPm19S+FsmRJZWTKsC2tWrejpJ1pQxueldcQEBE/60)
2025-06-01 03:21:45.595234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDp1xLuPtmEFh0S3IOYfNkzFy5FWC3BuhzcnTIHw5PwBjjpXsvJBP6hakRejpHd9MX58Coj2ceuYTzWmxrBP9MorK7TGBAlI41sFe38WJ8xGD33Zktp9K0srXyIzMapf8MZtq76Mp0w5hCm0J3oQhpzVebpuqK7dm9rmLo7Lwu6jelYinG7BjX4I2uq+Yxyh3sO1nkpybCH5rIFlBVoJ+ByNgGQkreSe2mO/0Y3AGYF/kuyrRrpJBcAyprQCeECC+9dYXZpd9rsIdb7FiyaeCqi9gePLN34hMUQRdlkRd9/0iTnDaOzliVpBDZTeE93b4miO7jCwYo/BdGqNv4yiGFW5YOWmx+tHoLbcMAXhCsA3hFcDrxyjPwfrDoqZV5kyHJEuWqXZiq0vvxsutOWEufUyZYpeJkyKk96qTBpBXlO+tBHzzhP+noi6yF4WTerzDLNhpantqBUe/WO+dBBYVMaiNmGC3VTvx6Nfd+Y/rLc6wxCwvP1N52FwGj0t63blIU=)
2025-06-01 03:21:45.595275 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDkKTNkmOKR09R8L2wniJg7EghXr7GO1IeOjoyh8EFVuuEK77guvckIP3n6u2STVk1iZR5Ozdm4eFSSdlmLM6cY=)
2025-06-01 03:21:45.595429 | orchestrator |
2025-06-01 03:21:45.596649 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:45.597191 | orchestrator | Sunday 01 June 2025 03:21:45 +0000 (0:00:01.067) 0:00:12.482 ***********
2025-06-01 03:21:46.598351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZIubfBF92BIbJ5LVRZWF0xJERC/SYKvq/7Wjs6uohTbZY39FC1Yj0FdTJr54U8alr011P+AOHMnGCJCXXCJATkR5DY33US+8GS3OOs/aC1SNHqRpB/Qi1z1TUe17lQER+KLCCt9u+Ih+DWQF310MEssZ/obdNa0OJpvBjKbaCQJLnsNbP3C1D8ifsspAvuYFTTar0JQVWaX/bNamSXZ9Dl5krrqwbLIHQx34AtZeZiUyM0GJvoB1jdzCx8DTujS5CXpUwdOD5rboUF7btpk6lAcAwabLrSPH8UsoNpiE7YQahWta0WEF+OBU9zuUiATZAg5r2V3xQNp8tXaBgua4ZJXx9bGMVDh5azOx78xVogiD44X7iffXL5exIXbC0r1798G9y24BMnNaHGV6mIj21SRsjYQuHABaEe8SRJOkUbPO/N0YoXiUF18XHWYHvCzyhkaqlrLOKaALn9V5RSKmlwlsedzrBafuj3zUfnngr3Pjz5/JKzWEwCoIhxTY2Hos=)
2025-06-01 03:21:46.598831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGAm/3uQyidYEwdGXD9x3b7QPMP2lbVqLeS+mOZqsWhvuqINdUwm22K4nLOxgftVtXMBYq0oStk/8ecJmoouiYk=)
2025-06-01 03:21:46.599848 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtbT83XqxmReiYRC0jO8XQaoV5ysGaqH7Tyw1VSSoyb)
2025-06-01 03:21:46.600494 | orchestrator |
2025-06-01 03:21:46.601203 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-01 03:21:46.601775 | orchestrator | Sunday 01 June 2025 03:21:46 +0000 (0:00:01.005) 0:00:13.487 ***********
2025-06-01 03:21:51.803267 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-01 03:21:51.803649 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-01 03:21:51.803683 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-01 03:21:51.804573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-01 03:21:51.806298 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-01 03:21:51.806984 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-01 03:21:51.808000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-01 03:21:51.808342 | orchestrator |
2025-06-01 03:21:51.809189 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-01 03:21:51.809600 | orchestrator | Sunday 01 June 2025 03:21:51 +0000 (0:00:05.203) 0:00:18.691 ***********
2025-06-01 03:21:51.959228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-01 03:21:51.959319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-01 03:21:51.959374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-01 03:21:51.959466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-01 03:21:51.959744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-01 03:21:51.961726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-01 03:21:51.962791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-01 03:21:51.963642 | orchestrator |
2025-06-01 03:21:51.964273 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:51.965084 | orchestrator | Sunday 01 June 2025 03:21:51 +0000 (0:00:00.158) 0:00:18.849 ***********
2025-06-01 03:21:53.003070 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCBMMsJD01Zyv8uEOegL11mHmgUOQXr5Qw2n3xR2fz5c8QD3OOkb1L8C7k0NrZohf6t8n76SWG9fPj12HtLaMSg=)
2025-06-01 03:21:53.004265 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+hBwkZ9DcOoaILGY+bBuMrbBg51lDPWfpxk53lEmKA)
2025-06-01 03:21:53.004887 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgLCK7qhgRVon+MxzHktQwoRyPFs1/fzDQlYuYQ8NJuVGXJJod7dx2HsILpF/A8lwOTWcXemMZzR3jJUzsOSONqJfI5STRICQs2lrYjOEi5NE2Y4mHOxLPoFKwjIqu2gGAwKpdfnwayigNGhzdCCY37bkpQe2asweXHiTad9E4Ku6wxlSLwNLy9aknrQX1bKnRaj34ZcA3Z3ykVmmQHxMvCXC74fr479rEtRgAdycSkYQmk6wHmjpqAAduLnpwQqXoPTOkImEck9QOhx+smS2uUPQGWHehN3IUrxUGwulogPIkjXzBedXsF+/XLJZyrEbLBcv8HWk5fpb6MqI/03/hjSCSMyRUd+ado8g+7S8mK40jv0KWc1gzwClb+bADgaceJMH03K/Xmn5gHt19RUqK49HMRNoiqo/GxLWnLX5n1+p1ukXYK2JPSELs7kbTRSUVccz/uD+IRK4CmTlXmMHuYGjsf8+McnZz/6UW43tVm5t6q194MziOguKS/KC5OZU=)
2025-06-01 03:21:53.005689 | orchestrator |
2025-06-01 03:21:53.006427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:53.007178 | orchestrator | Sunday 01 June 2025 03:21:52 +0000 (0:00:01.041) 0:00:19.891 ***********
2025-06-01 03:21:54.001062 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/FibvXSyijBiu6tLukOpxAEahJAdPXHvcKoUsRWNpMKIbXtNb8vNDj4oj5+1hZBvBTeofgpuYUW2M0oWGP5XNVUxDfuDTLmS/c4s9cpabltuPkkf4GjTKzr8qZetIE3EwnGVqqwhTpi5XG+9xFzP1CJI1UKjgTcCBQEkS9/GaylpbNiYhQdqob7G9aY7hOKeUQPHaB8ZCJUZFy5ViOg+uo7ZZV9FB2hCUIBzOgXcfN6BcXaZD7chi5TkY6SMFXfRyAqecQJ88JYZARWr6lwpQuKbF4AUqKDZ4bpWbP5n/42aIQt8LC6LTIXIZiimgTABe1LeaRNX/kPGnwoQKzRJjWLQRwayWvGDu//9FKYhPWljkSNQrNNpLm0gehNPOD2YZAEq8H11OEEWnjtqV1VA6L8DfvXuTFsVhhi7PLL7mt8Aher/OK8XeaOqABP7v8rKoOKnn/Dk2h1rFmhrWVLsbMdTioYDGEdSKOSwJenca9u3drAmSsgyvASVt/N1SGNk=)
2025-06-01 03:21:54.003082 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGL2L0GWifDfqPZLun2CpFkSUd0DS2lwtdeQZRsE7h6TYb9RTo41cwJjXmEQy/nvsT5LXcwB4Uzlr9d9Kxj/uKA=)
2025-06-01 03:21:54.003130 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXiXhRSL8RNAjfOQdi10tR79V5AjkppX7KoY2f3JEoQ)
2025-06-01 03:21:54.003438 | orchestrator |
2025-06-01 03:21:54.005030 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-01 03:21:54.005063 | orchestrator | Sunday 01 June 2025 03:21:53 +0000 (0:00:00.999) 0:00:20.890 ***********
2025-06-01 03:21:55.021013 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWa4f6D6e/RiqxEwlWzml2R/IuTO5vByg8yGp8XJEDu)
2025-06-01 03:21:55.021918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQCx585fE3CLr/LcOyHeBUDyD0BUknY6L4Zm3LH+/k6JYbjbzZxZvaYjz5oi+zJivCSiqcgVInzlXIHoEGOBW0HZgbVD6jndFYC2UISn07QLXsVYiCOB0bzw9YXM00WQ/nQgOBS9G8mmKOA3rImsoIRa0LXZcJNH3p+/ywXpvmJY6LDOda5bym5lsGZRFXxHvQuf6gy4cL0ZM6JqpKwsYrinD402mWGmgzpLXc4myDoF5f0CWSZIMZRt9HVvk97OdwxK1W5+SX3nkoOuHUVtiAROWod9f7FedrPz9Sb0z7/hGCOj3M3LTCM5M0u6HXGPDq4v6hJ4EBEkjWyNB+xtEGvIl0VDW3UI9fIBZGieUJ4gnIJ43l7wAep0QGj0IaaTiq3Q45m9SHC6KocFgXKkyGZsC/1OFgFUg2XQVOP5RKl0JXTs5JO9kTk832KsyDrrizYQIXZ5FaRQzyOvMZjeFDd5VlAL852Fbuv9bWCPFSMFfnivoTqd4/rCYq3NiUrEYNc=) 2025-06-01 03:21:55.021955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBChq3fy9bcT6EmBR/DTd2cxgIQGXagRzscIVubnJVgRdOjM4b6sC/piWR9wbTitXH8dEHHhQ/CbffmNueuD706A=) 2025-06-01 03:21:55.021970 | orchestrator | 2025-06-01 03:21:55.021994 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 03:21:55.022006 | orchestrator | Sunday 01 June 2025 03:21:55 +0000 (0:00:01.019) 0:00:21.909 *********** 2025-06-01 03:21:56.009597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDaJCE05iD1YFNyIkW7AHlebmgc/Swpn9hHv+NgdlvEX) 2025-06-01 03:21:56.012218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9jCOUrtNz/pIfd+RvMbq/Asv1UY0WkExRwYMpWL5DaWFZu5BVDvUwTFzFlRRQBSYlrf3XUwwh1hMj4PxEYEtuHOcDCsGTsJwamZNyqeTcDMTvURPs2ex/hRKVnF2k1pvQIfTHsbVLvhKcK5i0TfMtZbom97ncKEIjv85iTT6qaVodXPjk5h1ugUvNQBVyk+5Oj8lLwi8/6o4LvXzZJIIXlxAWmoyxjMibHYIYFcF1e4+YpvrL/wKhjRyh3A80chnsYqtXPAQdsD/MsnrAlzXsKSe3dVDOwdDGnMrq8xtJBUgIkj7N7s+xttCOfb5XtE0tpmW99rR1Snt8W5lIPUCYt9wYtlRfN8aaeYCLbw1i8HNb1nNXql/PdLbniOKnsmq4L4Ekj50db5Ke6QRutuyDxZL7Dy92XEA1L0EOkLlJLN3ZzcrBhQxSlRvPdI+wozKpAqbfJtfRPPe8iOaSbsAQ5FuHlMOINaoxlKrW03it7a4NC5ZH5058BVAj4t9ovDE=) 2025-06-01 03:21:56.012255 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC23urp8lF1sQtbGtQVU3bgKXJkjWqTeEoD+9reRj0FnmFieZunNzLQrF7u4feB5ViykujBRh018ThV5okI5hK4=) 2025-06-01 03:21:56.012811 | orchestrator | 2025-06-01 03:21:56.013495 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 03:21:56.014186 | orchestrator | Sunday 01 June 2025 03:21:56 +0000 (0:00:00.988) 0:00:22.898 *********** 2025-06-01 03:21:57.056020 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjvC3X31kE9VcfZV6p75nuWun8CeV6emh2ma+mpgiEi02diUtxPD7PA3qE3tAt0Be13fnL9CekKdDCuyFnTTAw=) 2025-06-01 03:21:57.056163 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIiZbPjN4rkxXNrjBnKLCOF9akoEIeO6KKlqt1PxlRf8uGXnQbbhZmUF3UcVPOg3pdJ8RbsvMivEnN0XFK4ZLPRsb9IReD8faXoMd6xFBJQN+rjW7y26P9/ZAOSWKv96LWYE7RKSfa5gMOOkaHvgYik5GvFadrPVSLqzDAGVGgzMltn5oJca5D1vsNcFaStZF7YbsuXtYXCEL6MviBYic5L14Tw1ouLuvK1cfcQlpZc3RA2Fy7fm+LcPqgq6z9Wbipy91G6jgeoQmQigOUUcsY8ycdrAD9cd23DilKhN7GN43Y/8WtHXucrAlLwtbr354tUmFs+zgvGb+ns58BbxTg4HYMBVjjPTQBL/l/eiSET3Oq9Gi1jqtV4cXgwXsSYu2Fqnb+RB0qVCHj3nig3FdEx+8EveZpyFYf+jc6eyyZmLDHL7PbC+wH4Xy8yqtytI0FNcfGkROIyFy/t3JlU0SnKz6L5GT6FIL0AIBAG4YnG18Q+ClMzfCZZcIH+Z41MCk=) 2025-06-01 03:21:57.056317 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMyFODTLdf7GRri8FI4CrzuMt0tOlUScIHEGXpSlvPlD) 2025-06-01 03:21:57.056337 | orchestrator | 2025-06-01 03:21:57.056557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 03:21:57.057404 | orchestrator | Sunday 01 June 2025 03:21:57 +0000 (0:00:01.043) 0:00:23.941 *********** 2025-06-01 03:21:58.095680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIPXPm19S+FsmRJZWTKsC2tWrejpJ1pQxueldcQEBE/60) 2025-06-01 03:21:58.096168 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDp1xLuPtmEFh0S3IOYfNkzFy5FWC3BuhzcnTIHw5PwBjjpXsvJBP6hakRejpHd9MX58Coj2ceuYTzWmxrBP9MorK7TGBAlI41sFe38WJ8xGD33Zktp9K0srXyIzMapf8MZtq76Mp0w5hCm0J3oQhpzVebpuqK7dm9rmLo7Lwu6jelYinG7BjX4I2uq+Yxyh3sO1nkpybCH5rIFlBVoJ+ByNgGQkreSe2mO/0Y3AGYF/kuyrRrpJBcAyprQCeECC+9dYXZpd9rsIdb7FiyaeCqi9gePLN34hMUQRdlkRd9/0iTnDaOzliVpBDZTeE93b4miO7jCwYo/BdGqNv4yiGFW5YOWmx+tHoLbcMAXhCsA3hFcDrxyjPwfrDoqZV5kyHJEuWqXZiq0vvxsutOWEufUyZYpeJkyKk96qTBpBXlO+tBHzzhP+noi6yF4WTerzDLNhpantqBUe/WO+dBBYVMaiNmGC3VTvx6Nfd+Y/rLc6wxCwvP1N52FwGj0t63blIU=) 2025-06-01 03:21:58.096973 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDkKTNkmOKR09R8L2wniJg7EghXr7GO1IeOjoyh8EFVuuEK77guvckIP3n6u2STVk1iZR5Ozdm4eFSSdlmLM6cY=) 2025-06-01 03:21:58.097748 | orchestrator | 2025-06-01 03:21:58.098188 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 03:21:58.098966 | orchestrator | Sunday 01 June 2025 03:21:58 +0000 (0:00:01.039) 0:00:24.981 *********** 2025-06-01 03:21:59.107683 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZIubfBF92BIbJ5LVRZWF0xJERC/SYKvq/7Wjs6uohTbZY39FC1Yj0FdTJr54U8alr011P+AOHMnGCJCXXCJATkR5DY33US+8GS3OOs/aC1SNHqRpB/Qi1z1TUe17lQER+KLCCt9u+Ih+DWQF310MEssZ/obdNa0OJpvBjKbaCQJLnsNbP3C1D8ifsspAvuYFTTar0JQVWaX/bNamSXZ9Dl5krrqwbLIHQx34AtZeZiUyM0GJvoB1jdzCx8DTujS5CXpUwdOD5rboUF7btpk6lAcAwabLrSPH8UsoNpiE7YQahWta0WEF+OBU9zuUiATZAg5r2V3xQNp8tXaBgua4ZJXx9bGMVDh5azOx78xVogiD44X7iffXL5exIXbC0r1798G9y24BMnNaHGV6mIj21SRsjYQuHABaEe8SRJOkUbPO/N0YoXiUF18XHWYHvCzyhkaqlrLOKaALn9V5RSKmlwlsedzrBafuj3zUfnngr3Pjz5/JKzWEwCoIhxTY2Hos=) 2025-06-01 03:21:59.107862 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGAm/3uQyidYEwdGXD9x3b7QPMP2lbVqLeS+mOZqsWhvuqINdUwm22K4nLOxgftVtXMBYq0oStk/8ecJmoouiYk=) 2025-06-01 03:21:59.108563 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtbT83XqxmReiYRC0jO8XQaoV5ysGaqH7Tyw1VSSoyb) 2025-06-01 03:21:59.109890 | orchestrator | 2025-06-01 03:21:59.110737 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-01 03:21:59.111616 | orchestrator | Sunday 01 June 2025 03:21:59 +0000 (0:00:01.015) 0:00:25.996 *********** 2025-06-01 03:21:59.258210 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-01 03:21:59.260144 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-01 03:21:59.260194 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-01 03:21:59.260216 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-01 03:21:59.260963 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-01 03:21:59.261428 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-01 03:21:59.262085 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-01 03:21:59.262372 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:21:59.263125 | orchestrator | 2025-06-01 03:21:59.263456 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-01 03:21:59.264013 | orchestrator | Sunday 01 June 2025 03:21:59 +0000 (0:00:00.151) 0:00:26.148 *********** 2025-06-01 03:21:59.309253 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:21:59.309330 | orchestrator | 2025-06-01 03:21:59.309823 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-01 03:21:59.309993 | orchestrator | Sunday 01 June 2025 
03:21:59 +0000 (0:00:00.051) 0:00:26.199 ***********
2025-06-01 03:21:59.359884 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:21:59.361752 | orchestrator |
2025-06-01 03:21:59.362008 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-06-01 03:21:59.363224 | orchestrator | Sunday 01 June 2025 03:21:59 +0000 (0:00:00.050) 0:00:26.250 ***********
2025-06-01 03:21:59.980299 | orchestrator | changed: [testbed-manager]
2025-06-01 03:21:59.980400 | orchestrator |
2025-06-01 03:21:59.981094 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:21:59.981247 | orchestrator | 2025-06-01 03:21:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:21:59.981572 | orchestrator | 2025-06-01 03:21:59 | INFO  | Please wait and do not abort execution.
2025-06-01 03:21:59.982514 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 03:21:59.983340 | orchestrator |
2025-06-01 03:21:59.983499 | orchestrator |
2025-06-01 03:21:59.984139 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:21:59.984743 | orchestrator | Sunday 01 June 2025 03:21:59 +0000 (0:00:00.619) 0:00:26.870 ***********
2025-06-01 03:21:59.985129 | orchestrator | ===============================================================================
2025-06-01 03:21:59.985703 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.89s
2025-06-01 03:21:59.986333 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s
2025-06-01 03:21:59.987153 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-06-01 03:21:59.987685 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-06-01 03:21:59.989289 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-01 03:21:59.989395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-01 03:21:59.989500 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-01 03:21:59.989547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-01 03:21:59.989935 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-01 03:21:59.990491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-01 03:21:59.990620 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-01 03:21:59.990922 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-01 03:21:59.991180 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-01 03:21:59.991588 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-01 03:21:59.991900 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-06-01 03:21:59.993012 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-06-01 03:21:59.993323 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.62s
2025-06-01 03:21:59.993680 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-06-01 03:21:59.994086 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-06-01 03:21:59.994381 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2025-06-01 03:22:00.500814 | orchestrator | + osism apply squid
2025-06-01 03:22:02.154925 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:22:02.155048 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:22:02.155064 | orchestrator | Registering Redlock._release_script
2025-06-01 03:22:02.212392 | orchestrator | 2025-06-01 03:22:02 | INFO  | Task 259ec288-29e6-4acb-ad88-3d5a4de23f7f (squid) was prepared for execution.
2025-06-01 03:22:02.212456 | orchestrator | 2025-06-01 03:22:02 | INFO  | It takes a moment until task 259ec288-29e6-4acb-ad88-3d5a4de23f7f (squid) has been started and output is visible here.
2025-06-01 03:22:06.264479 | orchestrator |
2025-06-01 03:22:06.264742 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-06-01 03:22:06.266003 | orchestrator |
2025-06-01 03:22:06.267365 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-06-01 03:22:06.268417 | orchestrator | Sunday 01 June 2025 03:22:06 +0000 (0:00:00.174) 0:00:00.174 ***********
2025-06-01 03:22:06.356197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-06-01 03:22:06.356375 | orchestrator |
2025-06-01 03:22:06.356800 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-06-01 03:22:06.357692 | orchestrator | Sunday 01 June 2025 03:22:06 +0000 (0:00:00.096) 0:00:00.271 ***********
2025-06-01 03:22:07.698304 | orchestrator | ok: [testbed-manager]
2025-06-01 03:22:07.699394 | orchestrator |
2025-06-01 03:22:07.700496 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-06-01 03:22:07.701494 | orchestrator | Sunday 01 June 2025 03:22:07 +0000 (0:00:01.340) 0:00:01.611 ***********
2025-06-01 03:22:08.848040 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-06-01 03:22:08.848145 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-06-01 03:22:08.848163 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-06-01 03:22:08.848242 | orchestrator |
2025-06-01 03:22:08.848692 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-06-01 03:22:08.849203 | orchestrator | Sunday 01 June 2025 03:22:08 +0000 (0:00:01.146) 0:00:02.758 ***********
2025-06-01 03:22:09.902766 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-06-01 03:22:09.902874 | orchestrator |
2025-06-01 03:22:09.902891 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-06-01 03:22:09.902976 | orchestrator | Sunday 01 June 2025 03:22:09 +0000 (0:00:01.057) 0:00:03.815 ***********
2025-06-01 03:22:10.240944 | orchestrator | ok: [testbed-manager]
2025-06-01 03:22:10.241622 | orchestrator |
2025-06-01 03:22:10.242109 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-06-01 03:22:10.242791 | orchestrator | Sunday 01 June 2025 03:22:10 +0000 (0:00:00.340) 0:00:04.155 ***********
2025-06-01 03:22:11.166315 | orchestrator | changed: [testbed-manager]
2025-06-01 03:22:11.166426 | orchestrator |
2025-06-01 03:22:11.166680 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-06-01 03:22:11.167539 | orchestrator | Sunday 01 June 2025 03:22:11 +0000 (0:00:00.922) 0:00:05.078 ***********
2025-06-01 03:22:43.087585 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-06-01 03:22:43.087703 | orchestrator | ok: [testbed-manager]
2025-06-01 03:22:43.087721 | orchestrator |
2025-06-01 03:22:43.087735 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-06-01 03:22:43.087875 | orchestrator | Sunday 01 June 2025 03:22:43 +0000 (0:00:31.916) 0:00:36.995 ***********
2025-06-01 03:22:55.537646 | orchestrator | changed: [testbed-manager]
2025-06-01 03:22:55.537770 | orchestrator |
2025-06-01 03:22:55.537788 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-06-01 03:22:55.537801 | orchestrator | Sunday 01 June 2025 03:22:55 +0000 (0:00:12.451) 0:00:49.447 ***********
2025-06-01 03:23:55.606843 | orchestrator | Pausing for 60 seconds
2025-06-01 03:23:55.606967 | orchestrator | changed: [testbed-manager]
2025-06-01 03:23:55.606984 | orchestrator |
2025-06-01 03:23:55.606997 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-06-01 03:23:55.607038 | orchestrator | Sunday 01 June 2025 03:23:55 +0000 (0:01:00.068) 0:01:49.515 ***********
2025-06-01 03:23:55.669014 | orchestrator | ok: [testbed-manager]
2025-06-01 03:23:55.669830 | orchestrator |
2025-06-01 03:23:55.670706 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-06-01 03:23:55.672001 | orchestrator | Sunday 01 June 2025 03:23:55 +0000 (0:00:00.067) 0:01:49.582 ***********
2025-06-01 03:23:56.251377 | orchestrator | changed: [testbed-manager]
2025-06-01 03:23:56.251609 | orchestrator |
2025-06-01 03:23:56.251673 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:23:56.252709 | orchestrator | 2025-06-01 03:23:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:23:56.252752 | orchestrator | 2025-06-01 03:23:56 | INFO  | Please wait and do not abort execution.
2025-06-01 03:23:56.254269 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:23:56.254374 | orchestrator |
2025-06-01 03:23:56.254390 | orchestrator |
2025-06-01 03:23:56.254403 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:23:56.255343 | orchestrator | Sunday 01 June 2025 03:23:56 +0000 (0:00:00.584) 0:01:50.167 ***********
2025-06-01 03:23:56.255537 | orchestrator | ===============================================================================
2025-06-01 03:23:56.256657 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-06-01 03:23:56.256679 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.92s
2025-06-01 03:23:56.257189 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.45s
2025-06-01 03:23:56.257443 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.34s
2025-06-01 03:23:56.257854 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s
2025-06-01 03:23:56.258476 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-06-01 03:23:56.258863 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s
2025-06-01 03:23:56.259531 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s
2025-06-01 03:23:56.259699 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2025-06-01 03:23:56.260353 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-06-01 03:23:56.260615 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-06-01 03:23:56.712009 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-01 03:23:56.712119 | orchestrator | ++ semver latest 9.0.0
2025-06-01 03:23:56.757710 | orchestrator | + [[ -1 -lt 0 ]]
2025-06-01 03:23:56.757803 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-01 03:23:56.758074 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-06-01 03:23:58.408826 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:23:58.408929 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:23:58.408945 | orchestrator | Registering Redlock._release_script
2025-06-01 03:23:58.466345 | orchestrator | 2025-06-01 03:23:58 | INFO  | Task eb397222-af1a-4c9f-9366-f8b3e08ff67b (operator) was prepared for execution.
2025-06-01 03:23:58.466441 | orchestrator | 2025-06-01 03:23:58 | INFO  | It takes a moment until task eb397222-af1a-4c9f-9366-f8b3e08ff67b (operator) has been started and output is visible here.
2025-06-01 03:24:02.270479 | orchestrator | 2025-06-01 03:24:02.271098 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-01 03:24:02.272020 | orchestrator | 2025-06-01 03:24:02.272857 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 03:24:02.274011 | orchestrator | Sunday 01 June 2025 03:24:02 +0000 (0:00:00.143) 0:00:00.143 *********** 2025-06-01 03:24:06.497265 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:24:06.497413 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:24:06.497491 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:24:06.497981 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:24:06.500682 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:24:06.501778 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:24:06.502575 | orchestrator | 2025-06-01 03:24:06.502918 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-01 03:24:06.503901 | orchestrator | Sunday 01 June 2025 03:24:06 +0000 (0:00:04.229) 0:00:04.373 *********** 2025-06-01 03:24:08.222413 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:24:08.223774 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:24:08.226211 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:24:08.226274 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:24:08.226281 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:24:08.227492 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:24:08.228394 | orchestrator | 2025-06-01 03:24:08.229567 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-01 03:24:08.230458 | orchestrator | 2025-06-01 03:24:08.231174 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-01 03:24:08.232064 | orchestrator | Sunday 01 June 2025 03:24:08 +0000 (0:00:01.725) 0:00:06.098 *********** 2025-06-01 
03:24:08.288076 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:24:08.309985 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:24:08.331867 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:24:08.376602 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:24:08.377653 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:24:08.378156 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:24:08.379099 | orchestrator | 2025-06-01 03:24:08.379242 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-01 03:24:08.379827 | orchestrator | Sunday 01 June 2025 03:24:08 +0000 (0:00:00.156) 0:00:06.255 *********** 2025-06-01 03:24:08.446603 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:24:08.506168 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:24:08.545085 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:24:08.545452 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:24:08.546931 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:24:08.547461 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:24:08.548075 | orchestrator | 2025-06-01 03:24:08.548761 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-01 03:24:08.549394 | orchestrator | Sunday 01 June 2025 03:24:08 +0000 (0:00:00.167) 0:00:06.423 *********** 2025-06-01 03:24:09.134206 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:24:09.134319 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:24:09.134334 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:24:09.134416 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:24:09.135036 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:24:09.135273 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:24:09.136282 | orchestrator | 2025-06-01 03:24:09.136780 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-01 03:24:09.137301 | orchestrator | Sunday 01 June 2025 
03:24:09 +0000 (0:00:00.587) 0:00:07.010 *********** 2025-06-01 03:24:09.948308 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:24:09.948414 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:24:09.949139 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:24:09.950179 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:24:09.950639 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:24:09.951272 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:24:09.951856 | orchestrator | 2025-06-01 03:24:09.952943 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-01 03:24:09.953100 | orchestrator | Sunday 01 June 2025 03:24:09 +0000 (0:00:00.814) 0:00:07.824 *********** 2025-06-01 03:24:11.177824 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-01 03:24:11.177929 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-01 03:24:11.177946 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-01 03:24:11.178570 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-01 03:24:11.179929 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-01 03:24:11.180629 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-01 03:24:11.181111 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-01 03:24:11.181817 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-01 03:24:11.182481 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-01 03:24:11.182889 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-01 03:24:11.183398 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-01 03:24:11.183901 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-01 03:24:11.184644 | orchestrator | 2025-06-01 03:24:11.185831 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-01 03:24:11.186556 | orchestrator | Sunday 01 
June 2025 03:24:11 +0000 (0:00:01.226) 0:00:09.050 *********** 2025-06-01 03:24:12.462774 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:24:12.464586 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:24:12.465149 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:24:12.466103 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:24:12.467248 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:24:12.468749 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:24:12.469344 | orchestrator | 2025-06-01 03:24:12.470276 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-01 03:24:12.471010 | orchestrator | Sunday 01 June 2025 03:24:12 +0000 (0:00:01.287) 0:00:10.338 *********** 2025-06-01 03:24:13.611710 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-01 03:24:13.611828 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-01 03:24:13.612456 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-01 03:24:13.713122 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:24:13.713222 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:24:13.713272 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:24:13.713286 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:24:13.713297 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:24:13.713308 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 03:24:13.713380 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-01 03:24:13.713795 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-01 03:24:13.714312 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-01 03:24:13.714912 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-01 03:24:13.716032 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-01 03:24:13.716374 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-01 03:24:13.717066 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:24:13.717867 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:24:13.718094 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:24:13.718172 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:24:13.719985 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:24:13.720356 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-01 03:24:13.720854 | 
orchestrator |
2025-06-01 03:24:13.720972 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-01 03:24:13.721393 | orchestrator | Sunday 01 June 2025 03:24:13 +0000 (0:00:01.246) 0:00:11.585 ***********
2025-06-01 03:24:14.295854 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:14.296190 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:14.297062 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:24:14.299993 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:24:14.300045 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:14.300057 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:24:14.300070 | orchestrator |
2025-06-01 03:24:14.300289 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-01 03:24:14.301021 | orchestrator | Sunday 01 June 2025 03:24:14 +0000 (0:00:00.586) 0:00:12.172 ***********
2025-06-01 03:24:14.373231 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:24:14.399259 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:24:14.438168 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:24:14.492405 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:14.492593 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:14.493957 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:14.494740 | orchestrator |
2025-06-01 03:24:14.496270 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-01 03:24:14.496962 | orchestrator | Sunday 01 June 2025 03:24:14 +0000 (0:00:00.196) 0:00:12.369 ***********
2025-06-01 03:24:15.227637 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 03:24:15.228066 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-01 03:24:15.229120 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:15.230012 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:24:15.230776 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-01 03:24:15.231678 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:24:15.232900 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 03:24:15.232926 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:15.233737 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 03:24:15.234288 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:15.234808 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-01 03:24:15.235570 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:24:15.236034 | orchestrator |
2025-06-01 03:24:15.236637 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-01 03:24:15.236984 | orchestrator | Sunday 01 June 2025 03:24:15 +0000 (0:00:00.736) 0:00:13.105 ***********
2025-06-01 03:24:15.287916 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:24:15.309674 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:24:15.359166 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:24:15.392081 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:15.393090 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:15.393271 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:15.394096 | orchestrator |
2025-06-01 03:24:15.395125 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-01 03:24:15.395865 | orchestrator | Sunday 01 June 2025 03:24:15 +0000 (0:00:00.163) 0:00:13.269 ***********
2025-06-01 03:24:15.437904 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:24:15.460230 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:24:15.481807 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:24:15.503065 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:15.531936 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:15.532215 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:15.532939 | orchestrator |
2025-06-01 03:24:15.533692 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-01 03:24:15.534368 | orchestrator | Sunday 01 June 2025 03:24:15 +0000 (0:00:00.141) 0:00:13.410 ***********
2025-06-01 03:24:15.608707 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:24:15.628339 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:24:15.648476 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:24:15.676579 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:15.676750 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:15.677344 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:15.677658 | orchestrator |
2025-06-01 03:24:15.677930 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-01 03:24:15.678610 | orchestrator | Sunday 01 June 2025 03:24:15 +0000 (0:00:00.144) 0:00:13.554 ***********
2025-06-01 03:24:16.349896 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:24:16.350450 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:24:16.351348 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:16.352022 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:24:16.353119 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:16.353825 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:16.354380 | orchestrator |
2025-06-01 03:24:16.355592 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-01 03:24:16.355745 | orchestrator | Sunday 01 June 2025 03:24:16 +0000 (0:00:00.670) 0:00:14.225 ***********
2025-06-01 03:24:16.440936 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:24:16.466163 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:24:16.569862 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:24:16.570201 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:16.570829 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:16.571483 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:16.572117 | orchestrator |
2025-06-01 03:24:16.573176 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:24:16.573465 | orchestrator | 2025-06-01 03:24:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:24:16.573948 | orchestrator | 2025-06-01 03:24:16 | INFO  | Please wait and do not abort execution.
2025-06-01 03:24:16.574851 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:24:16.575541 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:24:16.576366 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:24:16.577291 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:24:16.577567 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:24:16.578305 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:24:16.579012 | orchestrator |
2025-06-01 03:24:16.579890 | orchestrator |
2025-06-01 03:24:16.580461 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:24:16.580919 | orchestrator | Sunday 01 June 2025 03:24:16 +0000 (0:00:00.221) 0:00:14.447 ***********
2025-06-01 03:24:16.581402 | orchestrator | ===============================================================================
2025-06-01 03:24:16.581896 | orchestrator | Gathering Facts --------------------------------------------------------- 4.23s
2025-06-01 03:24:16.582404 | orchestrator | Do not require tty for all users ---------------------------------------- 1.73s
2025-06-01 03:24:16.582871 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2025-06-01 03:24:16.583494 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s
2025-06-01 03:24:16.583965 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.23s
2025-06-01 03:24:16.584459 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-06-01 03:24:16.584946 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s
2025-06-01 03:24:16.585449 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2025-06-01 03:24:16.585956 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s
2025-06-01 03:24:16.587213 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-06-01 03:24:16.587307 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-06-01 03:24:16.587657 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-06-01 03:24:16.588081 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-06-01 03:24:16.588559 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-06-01 03:24:16.588987 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-06-01 03:24:16.589412 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-06-01 03:24:16.589792 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
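For reference, the "Set language variables in .bashrc configuration file" task in the play above looped over three export lines (visible as the loop items in the task output). The fragment it leaves in the operator user's ~/.bashrc amounts to:

```shell
# Locale exports appended to ~/.bashrc by the operator role
# (the three lines are taken verbatim from the loop items in the log)
export LANGUAGE=C.UTF-8
export LANG=C.UTF-8
export LC_ALL=C.UTF-8
```

Forcing C.UTF-8 this way gives every login shell a predictable locale, which avoids locale-dependent behavior in later deployment steps.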
2025-06-01 03:24:17.070179 | orchestrator | + osism apply --environment custom facts
2025-06-01 03:24:18.674647 | orchestrator | 2025-06-01 03:24:18 | INFO  | Trying to run play facts in environment custom
2025-06-01 03:24:18.676398 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:24:18.676431 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:24:18.676444 | orchestrator | Registering Redlock._release_script
2025-06-01 03:24:18.732356 | orchestrator | 2025-06-01 03:24:18 | INFO  | Task 6df9506b-0837-4d08-9eb4-961fc6277cfe (facts) was prepared for execution.
2025-06-01 03:24:18.732463 | orchestrator | 2025-06-01 03:24:18 | INFO  | It takes a moment until task 6df9506b-0837-4d08-9eb4-961fc6277cfe (facts) has been started and output is visible here.
2025-06-01 03:24:22.524230 | orchestrator |
2025-06-01 03:24:22.526397 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-01 03:24:22.526469 | orchestrator |
2025-06-01 03:24:22.526556 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-01 03:24:22.527629 | orchestrator | Sunday 01 June 2025 03:24:22 +0000 (0:00:00.084) 0:00:00.084 ***********
2025-06-01 03:24:23.916950 | orchestrator | ok: [testbed-manager]
2025-06-01 03:24:23.919670 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:24:23.919706 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:23.919719 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:23.920386 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:24:23.922233 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:23.922995 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:24:23.924129 | orchestrator |
2025-06-01 03:24:23.924957 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-01 03:24:23.925963 | orchestrator | Sunday 01 June 2025 03:24:23 +0000 (0:00:01.393) 0:00:01.477 ***********
2025-06-01 03:24:25.070250 | orchestrator | ok: [testbed-manager]
2025-06-01 03:24:25.071602 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:25.071638 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:24:25.073128 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:24:25.073606 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:25.074590 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:25.075931 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:24:25.075956 | orchestrator |
2025-06-01 03:24:25.076611 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-01 03:24:25.077247 | orchestrator |
2025-06-01 03:24:25.077937 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-01 03:24:25.078586 | orchestrator | Sunday 01 June 2025 03:24:25 +0000 (0:00:01.156) 0:00:02.633 ***********
2025-06-01 03:24:25.170950 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:25.171235 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:25.171759 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:25.172094 | orchestrator |
2025-06-01 03:24:25.172577 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-01 03:24:25.172688 | orchestrator | Sunday 01 June 2025 03:24:25 +0000 (0:00:00.101) 0:00:02.734 ***********
2025-06-01 03:24:25.371503 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:25.371638 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:25.371653 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:25.371664 | orchestrator |
2025-06-01 03:24:25.371676 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-01 03:24:25.371689 | orchestrator | Sunday 01 June 2025 03:24:25 +0000 (0:00:00.198) 0:00:02.933 ***********
2025-06-01 03:24:25.574289 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:25.574391 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:25.574406 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:25.574417 | orchestrator |
2025-06-01 03:24:25.574430 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-01 03:24:25.574442 | orchestrator | Sunday 01 June 2025 03:24:25 +0000 (0:00:00.199) 0:00:03.133 ***********
2025-06-01 03:24:25.702212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 03:24:25.702406 | orchestrator |
2025-06-01 03:24:25.702427 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-01 03:24:25.702813 | orchestrator | Sunday 01 June 2025 03:24:25 +0000 (0:00:00.127) 0:00:03.260 ***********
2025-06-01 03:24:26.129742 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:26.132399 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:26.132457 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:26.133117 | orchestrator |
2025-06-01 03:24:26.133726 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-01 03:24:26.135068 | orchestrator | Sunday 01 June 2025 03:24:26 +0000 (0:00:00.433) 0:00:03.693 ***********
2025-06-01 03:24:26.234296 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:26.234793 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:26.235428 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:26.236060 | orchestrator |
2025-06-01 03:24:26.237223 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-01 03:24:26.237421 | orchestrator | Sunday 01 June 2025 03:24:26 +0000 (0:00:00.105) 0:00:03.798 ***********
2025-06-01 03:24:27.288063 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:27.288370 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:27.289269 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:27.290129 | orchestrator |
2025-06-01 03:24:27.290933 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-01 03:24:27.291453 | orchestrator | Sunday 01 June 2025 03:24:27 +0000 (0:00:01.051) 0:00:04.850 ***********
2025-06-01 03:24:27.745827 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:27.747165 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:27.748576 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:27.749547 | orchestrator |
2025-06-01 03:24:27.750234 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-01 03:24:27.750846 | orchestrator | Sunday 01 June 2025 03:24:27 +0000 (0:00:00.457) 0:00:05.307 ***********
2025-06-01 03:24:28.888795 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:28.889794 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:28.890948 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:28.891824 | orchestrator |
2025-06-01 03:24:28.892551 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-01 03:24:28.892873 | orchestrator | Sunday 01 June 2025 03:24:28 +0000 (0:00:01.143) 0:00:06.451 ***********
2025-06-01 03:24:42.421717 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:42.421840 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:42.421858 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:42.421870 | orchestrator |
2025-06-01 03:24:42.421883 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-01 03:24:42.421896 | orchestrator | Sunday 01 June 2025 03:24:42 +0000 (0:00:13.525) 0:00:19.977 ***********
2025-06-01 03:24:42.538753 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:24:42.538912 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:24:42.539395 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:24:42.540259 | orchestrator |
2025-06-01 03:24:42.540716 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-01 03:24:42.541151 | orchestrator | Sunday 01 June 2025 03:24:42 +0000 (0:00:00.125) 0:00:20.102 ***********
2025-06-01 03:24:49.771595 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:24:49.771708 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:24:49.772053 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:24:49.774835 | orchestrator |
2025-06-01 03:24:49.775186 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-01 03:24:49.775994 | orchestrator | Sunday 01 June 2025 03:24:49 +0000 (0:00:07.230) 0:00:27.332 ***********
2025-06-01 03:24:50.190391 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:50.190614 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:50.192597 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:50.192699 | orchestrator |
2025-06-01 03:24:50.193229 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-01 03:24:50.194563 | orchestrator | Sunday 01 June 2025 03:24:50 +0000 (0:00:00.421) 0:00:27.754 ***********
2025-06-01 03:24:53.661748 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-01 03:24:53.663227 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-01 03:24:53.664748 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-01 03:24:53.666487 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-01 03:24:53.667404 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-01 03:24:53.669490 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-01 03:24:53.669543 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-01 03:24:53.669840 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-01 03:24:53.670975 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-01 03:24:53.671726 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-01 03:24:53.672090 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-01 03:24:53.673162 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-01 03:24:53.673563 | orchestrator |
2025-06-01 03:24:53.674181 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-01 03:24:53.674972 | orchestrator | Sunday 01 June 2025 03:24:53 +0000 (0:00:03.469) 0:00:31.224 ***********
2025-06-01 03:24:54.833809 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:54.836257 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:54.836374 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:54.836399 | orchestrator |
2025-06-01 03:24:54.836599 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 03:24:54.837159 | orchestrator |
2025-06-01 03:24:54.838761 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 03:24:54.841738 | orchestrator | Sunday 01 June 2025 03:24:54 +0000 (0:00:01.170) 0:00:32.394 ***********
2025-06-01 03:24:58.608122 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:24:58.608344 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:24:58.608945 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:24:58.609403 | orchestrator | ok: [testbed-manager]
2025-06-01 03:24:58.610383 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:24:58.611620 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:24:58.612371 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:24:58.613375 | orchestrator |
2025-06-01 03:24:58.613762 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:24:58.614260 | orchestrator | 2025-06-01 03:24:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:24:58.614938 | orchestrator | 2025-06-01 03:24:58 | INFO  | Please wait and do not abort execution.
2025-06-01 03:24:58.615655 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:24:58.616069 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:24:58.616526 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:24:58.617086 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:24:58.617531 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:24:58.618100 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:24:58.618843 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:24:58.619347 | orchestrator |
2025-06-01 03:24:58.619789 | orchestrator |
2025-06-01 03:24:58.620218 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:24:58.620572 | orchestrator | Sunday 01 June 2025 03:24:58 +0000 (0:00:03.777) 0:00:36.171 ***********
2025-06-01 03:24:58.621214 | orchestrator | ===============================================================================
2025-06-01 03:24:58.621342 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.53s
2025-06-01 03:24:58.621844 | orchestrator | Install required packages (Debian) -------------------------------------- 7.23s
2025-06-01 03:24:58.622130 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.78s
2025-06-01 03:24:58.622643 | orchestrator | Copy fact files --------------------------------------------------------- 3.47s
2025-06-01 03:24:58.623012 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2025-06-01 03:24:58.623343 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.17s
2025-06-01 03:24:58.623666 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s
2025-06-01 03:24:58.624142 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.14s
2025-06-01 03:24:58.624477 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-06-01 03:24:58.624984 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-06-01 03:24:58.625206 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-06-01 03:24:58.625707 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-06-01 03:24:58.625992 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-06-01 03:24:58.626455 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2025-06-01 03:24:58.626809 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-06-01 03:24:58.627160 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-06-01 03:24:58.627545 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-06-01 03:24:58.627827 |
orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-06-01 03:24:59.037841 | orchestrator | + osism apply bootstrap
2025-06-01 03:25:00.647856 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:25:00.647963 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:25:00.647979 | orchestrator | Registering Redlock._release_script
2025-06-01 03:25:00.702564 | orchestrator | 2025-06-01 03:25:00 | INFO  | Task 97175269-023f-485c-b4cd-8c7764bf2cd3 (bootstrap) was prepared for execution.
2025-06-01 03:25:00.702652 | orchestrator | 2025-06-01 03:25:00 | INFO  | It takes a moment until task 97175269-023f-485c-b4cd-8c7764bf2cd3 (bootstrap) has been started and output is visible here.
2025-06-01 03:25:04.683340 | orchestrator |
2025-06-01 03:25:04.684034 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-01 03:25:04.684878 | orchestrator |
2025-06-01 03:25:04.686485 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-01 03:25:04.686894 | orchestrator | Sunday 01 June 2025 03:25:04 +0000 (0:00:00.157) 0:00:00.157 ***********
2025-06-01 03:25:04.754663 | orchestrator | ok: [testbed-manager]
2025-06-01 03:25:04.780095 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:25:04.805642 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:25:04.833066 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:25:04.908364 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:25:04.908450 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:25:04.908775 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:25:04.909565 | orchestrator |
2025-06-01 03:25:04.909955 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 03:25:04.910850 | orchestrator |
2025-06-01 03:25:04.910866 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 03:25:04.912740 | orchestrator | Sunday 01 June 2025 03:25:04 +0000 (0:00:00.228) 0:00:00.386 ***********
2025-06-01 03:25:08.569340 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:25:08.570267 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:25:08.571294 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:25:08.572105 | orchestrator | ok: [testbed-manager]
2025-06-01 03:25:08.572740 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:25:08.573611 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:25:08.574230 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:25:08.575684 | orchestrator |
2025-06-01 03:25:08.576777 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-01 03:25:08.577415 | orchestrator |
2025-06-01 03:25:08.578571 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 03:25:08.579325 | orchestrator | Sunday 01 June 2025 03:25:08 +0000 (0:00:03.660) 0:00:04.046 ***********
2025-06-01 03:25:08.657843 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-01 03:25:08.657998 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-01 03:25:08.692464 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-01 03:25:08.692619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-01 03:25:08.692708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-01 03:25:08.741068 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-01 03:25:08.742128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 03:25:08.742398 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-01 03:25:08.742673 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-01 03:25:08.746323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 03:25:08.746415 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-01 03:25:08.784292 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:25:08.784447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 03:25:08.785060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-01 03:25:08.785660 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-01 03:25:08.786499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 03:25:08.787107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-01 03:25:08.787598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 03:25:09.001244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-01 03:25:09.002254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-01 03:25:09.003346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 03:25:09.004965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-01 03:25:09.006631 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:25:09.007687 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 03:25:09.008745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-01 03:25:09.010657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-01 03:25:09.012616 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-01 03:25:09.012642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 03:25:09.013620 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 03:25:09.014464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-01 03:25:09.015725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 03:25:09.016077 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:25:09.017112 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 03:25:09.017888 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-01 03:25:09.018886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-01 03:25:09.019737 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-01 03:25:09.020622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 03:25:09.021195 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:25:09.022093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-01 03:25:09.022933 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-01 03:25:09.023765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-01 03:25:09.024613 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-01 03:25:09.025399 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-01 03:25:09.026066 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-01 03:25:09.026502 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-01 03:25:09.027347 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-01 03:25:09.027778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 03:25:09.028072 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-01 03:25:09.028575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-01 03:25:09.028990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 03:25:09.029959 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-01 03:25:09.031030 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:25:09.034486 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-01 03:25:09.034560 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:25:09.034574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 03:25:09.034585 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:25:09.034597 | orchestrator |
2025-06-01 03:25:09.034997 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-01 03:25:09.035897 | orchestrator |
2025-06-01 03:25:09.036643 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-01 03:25:09.037141 | orchestrator | Sunday 01 June 2025 03:25:08 +0000 (0:00:00.431) 0:00:04.477 ***********
2025-06-01 03:25:10.204241 | orchestrator | ok: [testbed-manager]
2025-06-01 03:25:10.205246 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:25:10.205611 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:25:10.206075 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:25:10.207137 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:25:10.207753 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:25:10.208050 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:25:10.209749 | orchestrator |
2025-06-01 03:25:10.209847 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-01 03:25:10.209888 | orchestrator | Sunday 01 June 2025 03:25:10 +0000 (0:00:01.203) 0:00:05.681 ***********
2025-06-01 03:25:11.344629 | orchestrator | ok: [testbed-manager]
2025-06-01 03:25:11.344812 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:25:11.346005 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:25:11.346729 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:25:11.347906 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:25:11.348304 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:25:11.349189 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:25:11.350160 | orchestrator |
2025-06-01 03:25:11.351843 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-06-01 03:25:11.352603 | orchestrator | Sunday 01 June 2025 03:25:11 +0000 (0:00:01.139) 0:00:06.820 ***********
2025-06-01 03:25:11.584800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:25:11.585656 | orchestrator |
2025-06-01 03:25:11.587448 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-06-01 03:25:11.588028 | orchestrator | Sunday 01 June 2025 03:25:11 +0000 (0:00:00.241) 0:00:07.061 ***********
2025-06-01 03:25:13.516377 | orchestrator | changed: [testbed-manager]
2025-06-01 03:25:13.519128 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:25:13.520256 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:25:13.520794 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:25:13.521757 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:25:13.524273 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:25:13.524872 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:25:13.528736 | orchestrator |
2025-06-01 03:25:13.529048 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-06-01 03:25:13.529785 | orchestrator | Sunday 01 June 2025 03:25:13 +0000 (0:00:01.929) 0:00:08.991 ***********
2025-06-01 03:25:13.580728 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:25:13.746865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:25:13.746974 |
orchestrator | 2025-06-01 03:25:13.747462 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-01 03:25:13.747884 | orchestrator | Sunday 01 June 2025 03:25:13 +0000 (0:00:00.232) 0:00:09.223 *********** 2025-06-01 03:25:14.777757 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:14.777863 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:14.778233 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:14.779303 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:14.779884 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:14.780769 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:14.781832 | orchestrator | 2025-06-01 03:25:14.781922 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-01 03:25:14.782593 | orchestrator | Sunday 01 June 2025 03:25:14 +0000 (0:00:01.029) 0:00:10.252 *********** 2025-06-01 03:25:14.840613 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:15.355099 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:15.355692 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:15.356777 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:15.357577 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:15.358361 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:15.359296 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:15.360127 | orchestrator | 2025-06-01 03:25:15.361131 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-01 03:25:15.361737 | orchestrator | Sunday 01 June 2025 03:25:15 +0000 (0:00:00.578) 0:00:10.831 *********** 2025-06-01 03:25:15.442422 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:25:15.466998 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:25:15.486290 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:25:15.758138 | orchestrator | 
skipping: [testbed-node-0] 2025-06-01 03:25:15.760487 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:25:15.760841 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:25:15.761850 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:15.762958 | orchestrator | 2025-06-01 03:25:15.763578 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-01 03:25:15.764374 | orchestrator | Sunday 01 June 2025 03:25:15 +0000 (0:00:00.401) 0:00:11.233 *********** 2025-06-01 03:25:15.832809 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:15.858786 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:25:15.888448 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:25:15.917665 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:25:15.969719 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:25:15.970150 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:25:15.971399 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:25:15.972185 | orchestrator | 2025-06-01 03:25:15.973603 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-01 03:25:15.974376 | orchestrator | Sunday 01 June 2025 03:25:15 +0000 (0:00:00.213) 0:00:11.447 *********** 2025-06-01 03:25:16.234961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:25:16.235720 | orchestrator | 2025-06-01 03:25:16.236874 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-01 03:25:16.238083 | orchestrator | Sunday 01 June 2025 03:25:16 +0000 (0:00:00.264) 0:00:11.711 *********** 2025-06-01 03:25:16.517254 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:25:16.517878 | orchestrator | 2025-06-01 03:25:16.518967 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-01 03:25:16.520774 | orchestrator | Sunday 01 June 2025 03:25:16 +0000 (0:00:00.282) 0:00:11.994 *********** 2025-06-01 03:25:17.721632 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:17.721736 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:17.722115 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:17.723093 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:17.723771 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:17.725540 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:17.726096 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:17.726697 | orchestrator | 2025-06-01 03:25:17.727638 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-01 03:25:17.728046 | orchestrator | Sunday 01 June 2025 03:25:17 +0000 (0:00:01.202) 0:00:13.196 *********** 2025-06-01 03:25:17.801041 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:17.823379 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:25:17.847129 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:25:17.871903 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:25:17.919149 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:25:17.919844 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:25:17.920652 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:25:17.921364 | orchestrator | 2025-06-01 03:25:17.921710 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-01 03:25:17.922434 | orchestrator | Sunday 01 June 2025 03:25:17 
+0000 (0:00:00.200) 0:00:13.396 *********** 2025-06-01 03:25:18.508238 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:18.509073 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:18.510268 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:18.511381 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:18.512612 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:18.513612 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:18.514717 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:18.516284 | orchestrator | 2025-06-01 03:25:18.517621 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-01 03:25:18.518610 | orchestrator | Sunday 01 June 2025 03:25:18 +0000 (0:00:00.587) 0:00:13.984 *********** 2025-06-01 03:25:18.621645 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:18.648423 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:25:18.673574 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:25:18.758069 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:25:18.759208 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:25:18.759433 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:25:18.760356 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:25:18.760961 | orchestrator | 2025-06-01 03:25:18.761981 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-01 03:25:18.762958 | orchestrator | Sunday 01 June 2025 03:25:18 +0000 (0:00:00.249) 0:00:14.234 *********** 2025-06-01 03:25:19.272057 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:19.272472 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:19.274126 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:19.274536 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:19.275221 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:19.275637 | orchestrator | changed: 
[testbed-node-1] 2025-06-01 03:25:19.276320 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:19.276801 | orchestrator | 2025-06-01 03:25:19.277299 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-01 03:25:19.278069 | orchestrator | Sunday 01 June 2025 03:25:19 +0000 (0:00:00.513) 0:00:14.748 *********** 2025-06-01 03:25:20.328615 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:20.329711 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:20.331283 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:20.332262 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:20.333353 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:20.334307 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:20.335642 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:20.336216 | orchestrator | 2025-06-01 03:25:20.337177 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-01 03:25:20.337714 | orchestrator | Sunday 01 June 2025 03:25:20 +0000 (0:00:01.053) 0:00:15.801 *********** 2025-06-01 03:25:21.425013 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:21.425124 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:21.425646 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:21.426922 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:21.427626 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:21.428123 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:21.429339 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:21.429876 | orchestrator | 2025-06-01 03:25:21.430266 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-01 03:25:21.430971 | orchestrator | Sunday 01 June 2025 03:25:21 +0000 (0:00:01.099) 0:00:16.901 *********** 2025-06-01 03:25:21.776331 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:25:21.777011 | orchestrator | 2025-06-01 03:25:21.778357 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-01 03:25:21.779574 | orchestrator | Sunday 01 June 2025 03:25:21 +0000 (0:00:00.352) 0:00:17.253 *********** 2025-06-01 03:25:21.851556 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:22.983385 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:22.984191 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:22.984312 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:22.985924 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:22.987152 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:22.988073 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:22.988886 | orchestrator | 2025-06-01 03:25:22.989427 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 03:25:22.989919 | orchestrator | Sunday 01 June 2025 03:25:22 +0000 (0:00:01.205) 0:00:18.458 *********** 2025-06-01 03:25:23.058459 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:23.081320 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:23.108191 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:23.128143 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:23.177728 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:23.178382 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:23.179100 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:23.179895 | orchestrator | 2025-06-01 03:25:23.180468 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 03:25:23.181076 | orchestrator | Sunday 01 June 2025 03:25:23 
+0000 (0:00:00.196) 0:00:18.655 *********** 2025-06-01 03:25:23.244467 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:23.268365 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:23.291809 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:23.318349 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:23.378710 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:23.378785 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:23.378879 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:23.379027 | orchestrator | 2025-06-01 03:25:23.379350 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 03:25:23.381176 | orchestrator | Sunday 01 June 2025 03:25:23 +0000 (0:00:00.201) 0:00:18.856 *********** 2025-06-01 03:25:23.450159 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:23.496615 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:23.522096 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:23.590749 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:23.590821 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:23.590917 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:23.591730 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:23.591760 | orchestrator | 2025-06-01 03:25:23.592578 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 03:25:23.592663 | orchestrator | Sunday 01 June 2025 03:25:23 +0000 (0:00:00.208) 0:00:19.065 *********** 2025-06-01 03:25:23.842350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:25:23.843317 | orchestrator | 2025-06-01 03:25:23.843721 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 03:25:23.844889 | 
orchestrator | Sunday 01 June 2025 03:25:23 +0000 (0:00:00.253) 0:00:19.319 *********** 2025-06-01 03:25:24.365648 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:24.366629 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:24.367218 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:24.367969 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:24.369152 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:24.369678 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:24.370083 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:24.371994 | orchestrator | 2025-06-01 03:25:24.373371 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 03:25:24.373411 | orchestrator | Sunday 01 June 2025 03:25:24 +0000 (0:00:00.521) 0:00:19.841 *********** 2025-06-01 03:25:24.463706 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:24.492626 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:25:24.520591 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:25:24.591920 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:25:24.592660 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:25:24.594211 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:25:24.595579 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:25:24.596094 | orchestrator | 2025-06-01 03:25:24.597225 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 03:25:24.597875 | orchestrator | Sunday 01 June 2025 03:25:24 +0000 (0:00:00.228) 0:00:20.069 *********** 2025-06-01 03:25:25.649722 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:25.649828 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:25.649843 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:25.650737 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:25.651947 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:25.652605 | orchestrator | changed: 
[testbed-node-0] 2025-06-01 03:25:25.653245 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:25.654140 | orchestrator | 2025-06-01 03:25:25.654678 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 03:25:25.655439 | orchestrator | Sunday 01 June 2025 03:25:25 +0000 (0:00:01.053) 0:00:21.123 *********** 2025-06-01 03:25:26.168311 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:26.168468 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:26.170979 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:26.172581 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:26.173532 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:26.173669 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:26.174977 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:26.175238 | orchestrator | 2025-06-01 03:25:26.175986 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 03:25:26.178898 | orchestrator | Sunday 01 June 2025 03:25:26 +0000 (0:00:00.521) 0:00:21.644 *********** 2025-06-01 03:25:27.333968 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:27.335495 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:27.335736 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:27.337670 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:27.338947 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:27.339994 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:27.341090 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:27.341722 | orchestrator | 2025-06-01 03:25:27.342732 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 03:25:27.343668 | orchestrator | Sunday 01 June 2025 03:25:27 +0000 (0:00:01.165) 0:00:22.809 *********** 2025-06-01 03:25:40.738548 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:40.738674 | orchestrator | ok: 
[testbed-node-4] 2025-06-01 03:25:40.738691 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:40.738704 | orchestrator | changed: [testbed-manager] 2025-06-01 03:25:40.739958 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:40.740581 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:40.742121 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:40.742461 | orchestrator | 2025-06-01 03:25:40.743159 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-01 03:25:40.743657 | orchestrator | Sunday 01 June 2025 03:25:40 +0000 (0:00:13.400) 0:00:36.209 *********** 2025-06-01 03:25:40.806331 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:40.830246 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:40.860156 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:40.880122 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:40.933240 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:40.934123 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:40.935060 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:40.935993 | orchestrator | 2025-06-01 03:25:40.936838 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-01 03:25:40.937772 | orchestrator | Sunday 01 June 2025 03:25:40 +0000 (0:00:00.200) 0:00:36.410 *********** 2025-06-01 03:25:41.014856 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:41.046278 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:41.077176 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:41.097393 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:41.150232 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:41.151093 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:41.151956 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:41.152759 | orchestrator | 2025-06-01 03:25:41.153430 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-06-01 03:25:41.154257 | orchestrator | Sunday 01 June 2025 03:25:41 +0000 (0:00:00.218) 0:00:36.628 *********** 2025-06-01 03:25:41.220365 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:41.244750 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:41.268426 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:41.298064 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:41.351936 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:41.352725 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:41.353471 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:41.354911 | orchestrator | 2025-06-01 03:25:41.355584 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-01 03:25:41.356319 | orchestrator | Sunday 01 June 2025 03:25:41 +0000 (0:00:00.201) 0:00:36.829 *********** 2025-06-01 03:25:41.629447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:25:41.630369 | orchestrator | 2025-06-01 03:25:41.631311 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-01 03:25:41.632319 | orchestrator | Sunday 01 June 2025 03:25:41 +0000 (0:00:00.276) 0:00:37.106 *********** 2025-06-01 03:25:43.187381 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:43.187807 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:43.188302 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:43.190070 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:43.190622 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:43.191396 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:43.191826 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:43.192471 | orchestrator | 2025-06-01 03:25:43.193451 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-01 03:25:43.194291 | orchestrator | Sunday 01 June 2025 03:25:43 +0000 (0:00:01.556) 0:00:38.662 *********** 2025-06-01 03:25:44.239282 | orchestrator | changed: [testbed-manager] 2025-06-01 03:25:44.239725 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:44.241926 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:44.242463 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:44.243601 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:44.244557 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:44.245147 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:44.245960 | orchestrator | 2025-06-01 03:25:44.246767 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-01 03:25:44.247598 | orchestrator | Sunday 01 June 2025 03:25:44 +0000 (0:00:01.052) 0:00:39.714 *********** 2025-06-01 03:25:45.049552 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:45.050140 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:45.050803 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:45.051296 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:45.055190 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:45.055215 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:45.055227 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:45.055383 | orchestrator | 2025-06-01 03:25:45.056019 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-01 03:25:45.056696 | orchestrator | Sunday 01 June 2025 03:25:45 +0000 (0:00:00.811) 0:00:40.526 *********** 2025-06-01 03:25:45.345490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 
03:25:45.345739 | orchestrator | 2025-06-01 03:25:45.346367 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-01 03:25:45.348709 | orchestrator | Sunday 01 June 2025 03:25:45 +0000 (0:00:00.294) 0:00:40.820 *********** 2025-06-01 03:25:46.367273 | orchestrator | changed: [testbed-manager] 2025-06-01 03:25:46.368417 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:46.368916 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:46.370132 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:46.371040 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:25:46.372221 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:46.373847 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:46.374829 | orchestrator | 2025-06-01 03:25:46.375808 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-01 03:25:46.376986 | orchestrator | Sunday 01 June 2025 03:25:46 +0000 (0:00:01.015) 0:00:41.836 *********** 2025-06-01 03:25:46.458774 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:25:46.485926 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:25:46.513570 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:25:46.658594 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:25:46.660137 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:25:46.660936 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:25:46.662724 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:25:46.662909 | orchestrator | 2025-06-01 03:25:46.664451 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-01 03:25:46.665174 | orchestrator | Sunday 01 June 2025 03:25:46 +0000 (0:00:00.297) 0:00:42.134 *********** 2025-06-01 03:25:58.680190 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:25:58.680285 | orchestrator | changed: [testbed-node-1] 2025-06-01 
03:25:58.680418 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:25:58.681503 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:25:58.681822 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:25:58.682839 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:25:58.683778 | orchestrator | changed: [testbed-manager] 2025-06-01 03:25:58.684568 | orchestrator | 2025-06-01 03:25:58.685286 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-01 03:25:58.685462 | orchestrator | Sunday 01 June 2025 03:25:58 +0000 (0:00:12.017) 0:00:54.151 *********** 2025-06-01 03:25:59.669434 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:25:59.672102 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:25:59.672136 | orchestrator | ok: [testbed-manager] 2025-06-01 03:25:59.672424 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:25:59.673480 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:25:59.674168 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:25:59.675273 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:25:59.675643 | orchestrator | 2025-06-01 03:25:59.676146 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-01 03:25:59.677003 | orchestrator | Sunday 01 June 2025 03:25:59 +0000 (0:00:00.992) 0:00:55.144 *********** 2025-06-01 03:26:00.557627 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:00.557738 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:00.562633 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:00.562645 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:00.567805 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:00.567847 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:00.567858 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:00.568669 | orchestrator | 2025-06-01 03:26:00.569215 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-06-01 03:26:00.569869 | orchestrator | Sunday 01 June 2025 03:26:00 +0000 (0:00:00.887) 0:00:56.031 *********** 2025-06-01 03:26:00.635826 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:00.657382 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:00.694336 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:00.722451 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:00.786840 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:00.788009 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:00.789250 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:00.790559 | orchestrator | 2025-06-01 03:26:00.791723 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-01 03:26:00.792353 | orchestrator | Sunday 01 June 2025 03:26:00 +0000 (0:00:00.232) 0:00:56.264 *********** 2025-06-01 03:26:00.861844 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:00.886139 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:00.914834 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:00.944350 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:01.002011 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:01.003316 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:01.004486 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:01.005612 | orchestrator | 2025-06-01 03:26:01.006714 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-01 03:26:01.007172 | orchestrator | Sunday 01 June 2025 03:26:00 +0000 (0:00:00.214) 0:00:56.479 *********** 2025-06-01 03:26:01.307994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:26:01.308440 | orchestrator | 2025-06-01 03:26:01.309569 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-06-01 03:26:01.310470 | orchestrator | Sunday 01 June 2025 03:26:01 +0000 (0:00:00.304) 0:00:56.784 *********** 2025-06-01 03:26:02.852686 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:02.852772 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:02.852832 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:02.855907 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:02.855931 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:02.855940 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:02.855949 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:02.855958 | orchestrator | 2025-06-01 03:26:02.855969 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-01 03:26:02.855980 | orchestrator | Sunday 01 June 2025 03:26:02 +0000 (0:00:01.543) 0:00:58.327 *********** 2025-06-01 03:26:03.439092 | orchestrator | changed: [testbed-manager] 2025-06-01 03:26:03.440939 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:26:03.441833 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:26:03.443648 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:26:03.443788 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:26:03.444694 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:26:03.445240 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:26:03.449644 | orchestrator | 2025-06-01 03:26:03.450633 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-01 03:26:03.450662 | orchestrator | Sunday 01 June 2025 03:26:03 +0000 (0:00:00.586) 0:00:58.913 *********** 2025-06-01 03:26:03.546748 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:03.571125 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:03.600542 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:03.659722 | orchestrator | ok: [testbed-node-5] 2025-06-01 
03:26:03.660813 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:03.661232 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:03.661794 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:03.662434 | orchestrator | 2025-06-01 03:26:03.663289 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-01 03:26:03.664130 | orchestrator | Sunday 01 June 2025 03:26:03 +0000 (0:00:00.222) 0:00:59.136 *********** 2025-06-01 03:26:04.794228 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:04.794576 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:04.795728 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:04.798386 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:04.798479 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:04.798494 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:04.798601 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:04.799016 | orchestrator | 2025-06-01 03:26:04.799594 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-01 03:26:04.800322 | orchestrator | Sunday 01 June 2025 03:26:04 +0000 (0:00:01.130) 0:01:00.267 *********** 2025-06-01 03:26:06.398306 | orchestrator | changed: [testbed-manager] 2025-06-01 03:26:06.399673 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:26:06.401684 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:26:06.402572 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:26:06.403489 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:26:06.404181 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:26:06.405208 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:26:06.405533 | orchestrator | 2025-06-01 03:26:06.406664 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-01 03:26:06.408316 | orchestrator | Sunday 01 June 2025 03:26:06 +0000 (0:00:01.604) 0:01:01.872 *********** 2025-06-01 
03:26:08.537387 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:08.538161 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:08.540932 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:08.542494 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:08.543835 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:08.545503 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:08.546499 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:08.547799 | orchestrator | 2025-06-01 03:26:08.549222 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-01 03:26:08.549609 | orchestrator | Sunday 01 June 2025 03:26:08 +0000 (0:00:02.140) 0:01:04.012 *********** 2025-06-01 03:26:43.331769 | orchestrator | ok: [testbed-manager] 2025-06-01 03:26:43.331930 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:26:43.332018 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:26:43.333304 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:26:43.335253 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:26:43.336497 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:26:43.339062 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:26:43.339677 | orchestrator | 2025-06-01 03:26:43.340900 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-01 03:26:43.341573 | orchestrator | Sunday 01 June 2025 03:26:43 +0000 (0:00:34.792) 0:01:38.804 *********** 2025-06-01 03:27:57.346446 | orchestrator | changed: [testbed-manager] 2025-06-01 03:27:57.346678 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:27:57.346707 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:27:57.347000 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:27:57.347642 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:27:57.348258 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:27:57.348627 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:27:57.349211 | 
orchestrator | 2025-06-01 03:27:57.349627 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-01 03:27:57.350438 | orchestrator | Sunday 01 June 2025 03:27:57 +0000 (0:01:14.013) 0:02:52.818 *********** 2025-06-01 03:27:58.860067 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:27:58.860177 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:27:58.860193 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:27:58.860204 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:27:58.861924 | orchestrator | ok: [testbed-manager] 2025-06-01 03:27:58.862232 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:27:58.863016 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:27:58.863919 | orchestrator | 2025-06-01 03:27:58.865017 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-01 03:27:58.865442 | orchestrator | Sunday 01 June 2025 03:27:58 +0000 (0:00:01.514) 0:02:54.332 *********** 2025-06-01 03:28:10.298620 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:28:10.298765 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:28:10.298783 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:28:10.298808 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:28:10.298820 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:28:10.298831 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:28:10.298843 | orchestrator | changed: [testbed-manager] 2025-06-01 03:28:10.298855 | orchestrator | 2025-06-01 03:28:10.298867 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-01 03:28:10.298880 | orchestrator | Sunday 01 June 2025 03:28:10 +0000 (0:00:11.436) 0:03:05.769 *********** 2025-06-01 03:28:10.679609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-01 03:28:10.680143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-01 03:28:10.684058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-01 03:28:10.684086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-01 03:28:10.684092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-06-01 03:28:10.684096 | orchestrator | 2025-06-01 03:28:10.684102 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-01 03:28:10.684791 | orchestrator | Sunday 01 June 2025 03:28:10 +0000 (0:00:00.387) 0:03:06.156 *********** 2025-06-01 03:28:10.738750 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 03:28:10.741825 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 03:28:10.761140 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:28:10.794960 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 03:28:10.794997 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:28:10.834239 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 03:28:10.836121 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:28:10.866357 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:28:11.339485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 03:28:11.339843 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 03:28:11.344024 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 03:28:11.344828 | orchestrator | 2025-06-01 03:28:11.345773 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-01 03:28:11.346789 | orchestrator | Sunday 01 June 2025 03:28:11 +0000 (0:00:00.657) 0:03:06.813 *********** 2025-06-01 03:28:11.410762 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 03:28:11.411746 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 03:28:11.412827 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 03:28:11.413950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 03:28:11.414921 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 03:28:11.415797 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 03:28:11.418849 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 03:28:11.419062 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 03:28:11.419221 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 03:28:11.419494 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 03:28:11.419764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 03:28:11.462330 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 03:28:11.462405 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 03:28:11.462651 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 03:28:11.463210 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 03:28:11.463973 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 03:28:11.464143 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 03:28:11.464627 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 03:28:11.466885 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 03:28:11.468999 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 03:28:11.504896 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 03:28:11.506219 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:28:11.507301 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 03:28:11.508434 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 03:28:11.509502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 03:28:11.510141 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 03:28:11.511303 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 03:28:11.511403 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 03:28:11.512019 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 03:28:11.513284 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 03:28:11.513312 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 03:28:11.514168 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 03:28:11.514636 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 03:28:11.514932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 03:28:11.565442 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 03:28:11.566745 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:28:11.567645 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 03:28:11.568670 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 03:28:11.569690 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 03:28:11.570434 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 03:28:11.571188 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 03:28:11.571969 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 03:28:11.589742 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:28:15.895872 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:28:15.896506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-01 03:28:15.897616 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-01 03:28:15.901471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-01 03:28:15.902571 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-01 03:28:15.903488 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-01 03:28:15.904381 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-01 03:28:15.905124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-01 03:28:15.906122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-01 03:28:15.906887 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-01 03:28:15.907936 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-01 03:28:15.909015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-01 03:28:15.909533 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-01 03:28:15.911045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-01 03:28:15.911068 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-01 03:28:15.911080 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-01 03:28:15.911329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-01 03:28:15.911745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-01 03:28:15.912128 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-01 03:28:15.912545 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-01 03:28:15.912944 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 
2025-06-01 03:28:15.913351 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-01 03:28:15.913840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-01 03:28:15.914188 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-01 03:28:15.914610 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-01 03:28:15.914957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-01 03:28:15.915471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-01 03:28:15.916190 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-01 03:28:15.916619 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-01 03:28:15.917445 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-01 03:28:15.918146 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-01 03:28:15.918425 | orchestrator | 2025-06-01 03:28:15.918733 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-01 03:28:15.918997 | orchestrator | Sunday 01 June 2025 03:28:15 +0000 (0:00:04.558) 0:03:11.371 *********** 2025-06-01 03:28:17.342308 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.342408 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.342941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.343374 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.345816 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.346594 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.347630 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 03:28:17.348085 | orchestrator | 2025-06-01 03:28:17.348910 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-01 03:28:17.349292 | orchestrator | Sunday 01 June 2025 03:28:17 +0000 (0:00:01.446) 0:03:12.817 *********** 2025-06-01 03:28:17.397977 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 03:28:17.434982 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:28:17.512904 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 03:28:17.513018 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 03:28:17.831863 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:28:17.833379 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:28:17.834157 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 03:28:17.835078 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:28:17.836302 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-01 03:28:17.837350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-01 03:28:17.838152 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-01 
03:28:17.839032 | orchestrator | 2025-06-01 03:28:17.839922 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-01 03:28:17.842884 | orchestrator | Sunday 01 June 2025 03:28:17 +0000 (0:00:00.489) 0:03:13.307 *********** 2025-06-01 03:28:17.891813 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 03:28:17.916131 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:28:17.996967 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 03:28:18.365680 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 03:28:18.366711 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:28:18.367546 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:28:18.368637 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 03:28:18.371183 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:28:18.371939 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-01 03:28:18.373020 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-01 03:28:18.373327 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-01 03:28:18.374220 | orchestrator | 2025-06-01 03:28:18.375166 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-01 03:28:18.375822 | orchestrator | Sunday 01 June 2025 03:28:18 +0000 (0:00:00.535) 0:03:13.842 *********** 2025-06-01 03:28:18.448907 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:28:18.474787 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:28:18.496755 | orchestrator 
| skipping: [testbed-node-4] 2025-06-01 03:28:18.529437 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:28:18.642262 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:28:18.643167 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:28:18.644161 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:28:18.645713 | orchestrator | 2025-06-01 03:28:18.646490 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-01 03:28:18.647676 | orchestrator | Sunday 01 June 2025 03:28:18 +0000 (0:00:00.276) 0:03:14.119 *********** 2025-06-01 03:28:24.239661 | orchestrator | ok: [testbed-manager] 2025-06-01 03:28:24.240078 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:28:24.240832 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:28:24.241562 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:28:24.243086 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:28:24.244180 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:28:24.244954 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:28:24.245798 | orchestrator | 2025-06-01 03:28:24.246295 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-01 03:28:24.246882 | orchestrator | Sunday 01 June 2025 03:28:24 +0000 (0:00:05.596) 0:03:19.716 *********** 2025-06-01 03:28:24.312485 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-01 03:28:24.362621 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:28:24.363347 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-01 03:28:24.364104 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-01 03:28:24.404979 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:28:24.444056 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:28:24.444106 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-01 03:28:24.489169 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  
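The sysctl tasks above apply different parameter sets per node role (generic values everywhere, `compute` and `k3s_node` values only on testbed-node-3/4/5, `elasticsearch`/`rabbitmq` values only on testbed-node-0/1/2). A small spot-check sketch for values like those in the log (hedged: key names and targets are taken from the log items above, not from the role source; `unset` is printed where a key is unavailable):

```shell
#!/bin/sh
# Spot-check a few sysctl values against the targets the play sets.
# Keys missing on this kernel (or a missing sysctl binary) show as
# "unset" rather than aborting the script.
set -eu

check() {
    key=$1; want=$2
    have=$(sysctl -n "$key" 2>/dev/null || echo "unset")
    printf '%-32s want=%-8s have=%s\n' "$key" "$want" "$have"
}

check vm.swappiness 1                      # generic: all nodes
check vm.max_map_count 262144              # elasticsearch nodes
check net.core.somaxconn 4096              # rabbitmq nodes
check net.netfilter.nf_conntrack_max 1048576  # compute nodes
```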
2025-06-01 03:28:24.489329 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:28:24.489737 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-01 03:28:24.561595 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:28:24.563309 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:28:24.563968 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-01 03:28:24.565090 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:28:24.565370 | orchestrator | 2025-06-01 03:28:24.566109 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-01 03:28:24.566633 | orchestrator | Sunday 01 June 2025 03:28:24 +0000 (0:00:00.322) 0:03:20.038 *********** 2025-06-01 03:28:25.539098 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-01 03:28:25.539980 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-01 03:28:25.540818 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-01 03:28:25.542447 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-01 03:28:25.543575 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-01 03:28:25.545211 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-01 03:28:25.546390 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-01 03:28:25.547724 | orchestrator | 2025-06-01 03:28:25.549399 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-01 03:28:25.550410 | orchestrator | Sunday 01 June 2025 03:28:25 +0000 (0:00:00.975) 0:03:21.014 *********** 2025-06-01 03:28:26.000323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:28:26.001713 | orchestrator | 2025-06-01 03:28:26.003121 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-06-01 03:28:26.004003 | orchestrator | Sunday 01 June 2025 03:28:25 +0000 (0:00:00.460) 0:03:21.474 *********** 2025-06-01 03:28:27.096045 | orchestrator | ok: [testbed-manager] 2025-06-01 03:28:27.098198 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:28:27.098234 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:28:27.099271 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:28:27.100063 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:28:27.101034 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:28:27.102078 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:28:27.103280 | orchestrator | 2025-06-01 03:28:27.104423 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-01 03:28:27.105081 | orchestrator | Sunday 01 June 2025 03:28:27 +0000 (0:00:01.097) 0:03:22.572 *********** 2025-06-01 03:28:27.682070 | orchestrator | ok: [testbed-manager] 2025-06-01 03:28:27.683223 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:28:27.684178 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:28:27.685922 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:28:27.686494 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:28:27.687298 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:28:27.688395 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:28:27.688797 | orchestrator | 2025-06-01 03:28:27.690145 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-01 03:28:27.690444 | orchestrator | Sunday 01 June 2025 03:28:27 +0000 (0:00:00.586) 0:03:23.158 *********** 2025-06-01 03:28:28.261077 | orchestrator | changed: [testbed-manager] 2025-06-01 03:28:28.263017 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:28:28.263389 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:28:28.265155 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:28:28.265422 | orchestrator | changed: [testbed-node-0] 
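The "Disable the dynamic motd-news service" task above reports `changed` on all hosts after checking for `/etc/default/motd-news`; on stock Ubuntu that file carries an `ENABLED=` switch. A minimal sketch of the same change (hedged: the role's exact edit is not shown in the log; `TARGET` defaults to a scratch file so the sketch can run without root):

```shell
#!/bin/sh
# Turn off the dynamic motd-news fetcher by forcing ENABLED=0.
# Point TARGET at /etc/default/motd-news to apply it for real.
set -eu
TARGET=${TARGET:-./motd-news}

# Scaffolding: create a stand-in file if none exists yet.
[ -f "$TARGET" ] || printf 'ENABLED=1\n' > "$TARGET"

# Flip any ENABLED= line to 0, leaving the rest of the file alone.
sed -i 's/^ENABLED=.*/ENABLED=0/' "$TARGET"
grep '^ENABLED=' "$TARGET"
```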
2025-06-01 03:28:28.266737 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:28.267820 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:28.268970 | orchestrator |
2025-06-01 03:28:28.269773 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-01 03:28:28.270671 | orchestrator | Sunday 01 June 2025 03:28:28 +0000 (0:00:00.578) 0:03:23.736 ***********
2025-06-01 03:28:28.844369 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:28.845048 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:28.846602 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:28.847328 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:28.848072 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:28.848814 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:28.849006 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:28.849832 | orchestrator |
2025-06-01 03:28:28.850139 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-01 03:28:28.850828 | orchestrator | Sunday 01 June 2025 03:28:28 +0000 (0:00:00.583) 0:03:24.319 ***********
2025-06-01 03:28:29.764693 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747073.8701718, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.765799 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747123.0690043, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.765988 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747115.536298, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.768150 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747106.5473537, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.769667 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747120.049061, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.770435 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747129.4117556, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.771396 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748747111.628335, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.772399 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747101.4575171, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.772986 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747022.7813845, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.773561 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747024.979781, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.774004 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747032.8701088, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.774676 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747023.1186604, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.775075 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747025.2306757, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.775977 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748747029.7625203, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 03:28:29.776263 | orchestrator |
2025-06-01 03:28:29.776680 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-01 03:28:29.777078 | orchestrator | Sunday 01 June 2025 03:28:29 +0000 (0:00:00.919) 0:03:25.239 ***********
2025-06-01 03:28:30.830361 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:30.830881 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:30.831575 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:30.832440 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:30.833747 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:30.834657 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:30.834941 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:30.835471 | orchestrator |
2025-06-01 03:28:30.836075 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-01 03:28:30.836687 | orchestrator | Sunday 01 June 2025 03:28:30 +0000 (0:00:01.067) 0:03:26.307 ***********
2025-06-01 03:28:31.934371 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:31.935426 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:31.936412 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:31.937539 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:31.938428 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:31.939159 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:31.939869 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:31.940820 | orchestrator |
2025-06-01 03:28:31.941462 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-01 03:28:31.941882 | orchestrator | Sunday 01 June 2025 03:28:31 +0000 (0:00:01.102) 0:03:27.409 ***********
2025-06-01 03:28:33.065115 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:33.065896 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:33.067657 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:33.068249 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:33.069001 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:33.069715 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:33.070423 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:33.071155 | orchestrator |
2025-06-01 03:28:33.071790 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-01 03:28:33.072403 | orchestrator | Sunday 01 June 2025 03:28:33 +0000 (0:00:01.130) 0:03:28.540 ***********
2025-06-01 03:28:33.123848 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:28:33.152367 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:28:33.195167 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:28:33.225980 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:28:33.256704 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:28:33.317432 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:28:33.318154 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:28:33.318443 | orchestrator |
2025-06-01 03:28:33.319253 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-01 03:28:33.319898 | orchestrator | Sunday 01 June 2025 03:28:33 +0000 (0:00:00.253) 0:03:28.793 ***********
2025-06-01 03:28:34.014821 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:34.015139 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:34.015472 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:34.019592 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:34.019992 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:34.020562 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:34.021089 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:34.021978 | orchestrator |
2025-06-01 03:28:34.022662 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-01 03:28:34.023408 | orchestrator | Sunday 01 June 2025 03:28:34 +0000 (0:00:00.696) 0:03:29.490 ***********
2025-06-01 03:28:34.389090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:28:34.389583 | orchestrator |
2025-06-01 03:28:34.389849 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-01 03:28:34.390998 | orchestrator | Sunday 01 June 2025 03:28:34 +0000 (0:00:00.374) 0:03:29.865 ***********
2025-06-01 03:28:41.941211 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:41.941332 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:41.941349 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:41.942065 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:41.944483 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:41.945465 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:41.946008 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:41.946820 | orchestrator |
2025-06-01 03:28:41.947501 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-01 03:28:41.949011 | orchestrator | Sunday 01 June 2025 03:28:41 +0000 (0:00:07.548) 0:03:37.413 ***********
2025-06-01 03:28:43.021213 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:43.022410 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:43.023641 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:43.024327 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:43.025594 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:43.026414 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:43.027444 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:43.027635 | orchestrator |
2025-06-01 03:28:43.028248 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-01 03:28:43.029053 | orchestrator | Sunday 01 June 2025 03:28:43 +0000 (0:00:01.083) 0:03:38.497 ***********
2025-06-01 03:28:44.023325 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:44.024002 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:44.024856 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:44.025920 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:44.027478 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:44.028020 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:44.028567 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:44.029338 | orchestrator |
2025-06-01 03:28:44.029981 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-01 03:28:44.030603 | orchestrator | Sunday 01 June 2025 03:28:44 +0000 (0:00:01.000) 0:03:39.498 ***********
2025-06-01 03:28:44.481172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:28:44.481270 | orchestrator |
2025-06-01 03:28:44.481438 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-01 03:28:44.482053 | orchestrator | Sunday 01 June 2025 03:28:44 +0000 (0:00:00.459) 0:03:39.957 ***********
2025-06-01 03:28:52.439614 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:52.440293 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:52.441024 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:52.443054 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:52.444607 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:52.445115 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:52.446610 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:52.446635 | orchestrator |
2025-06-01 03:28:52.447539 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-01 03:28:52.448351 | orchestrator | Sunday 01 June 2025 03:28:52 +0000 (0:00:07.957) 0:03:47.915 ***********
2025-06-01 03:28:53.016049 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:53.016858 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:53.017955 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:53.019149 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:53.019897 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:53.020771 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:53.021689 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:53.022792 | orchestrator |
2025-06-01 03:28:53.024093 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-01 03:28:53.025007 | orchestrator | Sunday 01 June 2025 03:28:53 +0000 (0:00:00.575) 0:03:48.491 ***********
2025-06-01 03:28:54.053956 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:54.054902 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:54.055383 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:54.057815 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:54.059155 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:54.059468 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:54.060213 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:54.061046 | orchestrator |
2025-06-01 03:28:54.062056 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-01 03:28:54.062316 | orchestrator | Sunday 01 June 2025 03:28:54 +0000 (0:00:01.038) 0:03:49.529 ***********
2025-06-01 03:28:55.126905 | orchestrator | changed: [testbed-manager]
2025-06-01 03:28:55.127008 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:28:55.128315 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:28:55.128339 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:28:55.128351 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:28:55.128924 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:28:55.129950 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:28:55.130491 | orchestrator |
2025-06-01 03:28:55.131370 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-01 03:28:55.131862 | orchestrator | Sunday 01 June 2025 03:28:55 +0000 (0:00:01.074) 0:03:50.603 ***********
2025-06-01 03:28:55.242001 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:55.279987 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:55.324282 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:55.357566 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:55.429914 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:55.431074 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:55.432648 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:55.433591 | orchestrator |
2025-06-01 03:28:55.434254 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-01 03:28:55.435093 | orchestrator | Sunday 01 June 2025 03:28:55 +0000 (0:00:00.302) 0:03:50.906 ***********
2025-06-01 03:28:55.552409 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:55.584739 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:55.620026 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:55.653786 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:55.734477 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:55.734962 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:55.735569 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:55.736237 | orchestrator |
2025-06-01 03:28:55.737059 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-01 03:28:55.737207 | orchestrator | Sunday 01 June 2025 03:28:55 +0000 (0:00:00.304) 0:03:51.211 ***********
2025-06-01 03:28:55.836121 | orchestrator | ok: [testbed-manager]
2025-06-01 03:28:55.868283 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:28:55.906088 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:28:55.935888 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:28:56.028057 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:28:56.028288 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:28:56.029086 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:28:56.029995 | orchestrator |
2025-06-01 03:28:56.030633 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-01 03:28:56.031047 | orchestrator | Sunday 01 June 2025 03:28:56 +0000 (0:00:00.293) 0:03:51.504 ***********
2025-06-01 03:29:01.661096 | orchestrator | ok: [testbed-manager]
2025-06-01 03:29:01.661309 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:29:01.661371 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:29:01.661857 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:29:01.662884 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:29:01.663404 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:29:01.663964 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:29:01.664455 | orchestrator |
2025-06-01 03:29:01.665306 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-01 03:29:01.665853 | orchestrator | Sunday 01 June 2025 03:29:01 +0000 (0:00:05.632) 0:03:57.137 ***********
2025-06-01 03:29:02.047761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:29:02.048321 | orchestrator |
2025-06-01 03:29:02.049718 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-01 03:29:02.051500 | orchestrator | Sunday 01 June 2025 03:29:02 +0000 (0:00:00.384) 0:03:57.521 ***********
2025-06-01 03:29:02.131740 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.131922 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-01 03:29:02.133075 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.133726 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-01 03:29:02.168560 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:29:02.168977 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.227084 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:29:02.227621 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-01 03:29:02.228368 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.229916 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-01 03:29:02.263812 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:29:02.307815 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:29:02.307879 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.307895 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-01 03:29:02.308995 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.394318 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-01 03:29:02.395203 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:29:02.396117 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:29:02.397058 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-01 03:29:02.398538 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-01 03:29:02.399596 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:29:02.400452 | orchestrator |
2025-06-01 03:29:02.400943 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-01 03:29:02.401471 | orchestrator | Sunday 01 June 2025 03:29:02 +0000 (0:00:00.348) 0:03:57.870 ***********
2025-06-01 03:29:02.774669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:29:02.775493 | orchestrator |
2025-06-01 03:29:02.776247 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-01 03:29:02.777976 | orchestrator | Sunday 01 June 2025 03:29:02 +0000 (0:00:00.379) 0:03:58.249 ***********
2025-06-01 03:29:02.850462 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-01 03:29:02.850653 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-01 03:29:02.883369 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:29:02.883447 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-01 03:29:02.919909 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:29:02.960501 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-01 03:29:02.960793 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:29:03.001578 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-01 03:29:03.001644 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:29:03.097368 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-01 03:29:03.098955 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:29:03.099957 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:29:03.100851 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-01 03:29:03.101246 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:29:03.102318 | orchestrator |
2025-06-01 03:29:03.102970 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-01 03:29:03.103486 | orchestrator | Sunday 01 June 2025 03:29:03 +0000 (0:00:00.324) 0:03:58.573 ***********
2025-06-01 03:29:03.594428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:29:03.594580 | orchestrator |
2025-06-01 03:29:03.594597 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-01 03:29:03.594610 | orchestrator | Sunday 01 June 2025 03:29:03 +0000 (0:00:00.494) 0:03:59.068 ***********
2025-06-01 03:29:36.786439 | orchestrator | changed: [testbed-manager]
2025-06-01 03:29:36.786609 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:29:36.786628 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:29:36.786640 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:29:36.786652 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:29:36.786663 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:29:36.786674 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:29:36.786685 | orchestrator |
2025-06-01 03:29:36.786928 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-01 03:29:36.788612 | orchestrator | Sunday 01 June 2025 03:29:36 +0000 (0:00:33.187) 0:04:32.256 ***********
2025-06-01 03:29:44.324051 | orchestrator | changed: [testbed-manager]
2025-06-01 03:29:44.327115 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:29:44.327178 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:29:44.331029 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:29:44.331081 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:29:44.334359 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:29:44.334887 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:29:44.335758 | orchestrator |
2025-06-01 03:29:44.338628 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-01 03:29:44.339167 | orchestrator | Sunday 01 June 2025 03:29:44 +0000 (0:00:07.539) 0:04:39.795 ***********
2025-06-01 03:29:51.616226 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:29:51.616349 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:29:51.618468 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:29:51.619132 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:29:51.622103 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:29:51.623039 | orchestrator | changed: [testbed-manager]
2025-06-01 03:29:51.623963 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:29:51.624956 | orchestrator |
2025-06-01 03:29:51.625632 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-01 03:29:51.626682 | orchestrator | Sunday 01 June 2025 03:29:51 +0000 (0:00:07.295) 0:04:47.091 ***********
2025-06-01 03:29:53.213450 | orchestrator | ok: [testbed-manager]
2025-06-01 03:29:53.213591 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:29:53.213608 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:29:53.213704 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:29:53.215264 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:29:53.215790 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:29:53.216470 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:29:53.217278 | orchestrator |
2025-06-01 03:29:53.217991 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-01 03:29:53.218600 | orchestrator | Sunday 01 June 2025 03:29:53 +0000 (0:00:01.595) 0:04:48.687 ***********
2025-06-01 03:29:58.899621 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:29:58.899800 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:29:58.901239 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:29:58.902527 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:29:58.903199 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:29:58.904457 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:29:58.904955 | orchestrator | changed: [testbed-manager]
2025-06-01 03:29:58.905412 | orchestrator |
2025-06-01 03:29:58.906239 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-01 03:29:58.907734 | orchestrator | Sunday 01 June 2025 03:29:58 +0000 (0:00:05.685) 0:04:54.373 ***********
2025-06-01 03:29:59.385581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:29:59.385755 | orchestrator |
2025-06-01 03:29:59.386782 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-01 03:29:59.387453 | orchestrator | Sunday 01 June 2025 03:29:59 +0000 (0:00:00.487) 0:04:54.860 ***********
2025-06-01 03:30:00.157734 | orchestrator | changed: [testbed-manager]
2025-06-01 03:30:00.158712 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:00.160444 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:00.161741 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:00.162831 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:00.163755 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:00.164708 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:00.165808 | orchestrator |
2025-06-01 03:30:00.166603 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-01 03:30:00.167646 | orchestrator | Sunday 01 June 2025 03:30:00 +0000 (0:00:00.770) 0:04:55.630 ***********
2025-06-01 03:30:01.802063 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:30:01.802244 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:30:01.802665 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:30:01.805288 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:30:01.805990 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:01.807280 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:30:01.809046 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:30:01.809919 | orchestrator |
2025-06-01 03:30:01.810994 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-01 03:30:01.811680 | orchestrator | Sunday 01 June 2025 03:30:01 +0000 (0:00:01.645) 0:04:57.276 ***********
2025-06-01 03:30:02.609188 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:02.610079 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:02.611423 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:02.612931 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:02.613589 | orchestrator | changed: [testbed-manager]
2025-06-01 03:30:02.614732 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:02.615376 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:02.616473 | orchestrator |
2025-06-01 03:30:02.617299 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-01 03:30:02.617567 | orchestrator | Sunday 01 June 2025 03:30:02 +0000 (0:00:00.807) 0:04:58.084 ***********
2025-06-01 03:30:02.740627 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:30:02.803761 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:30:02.845216 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:30:02.880667 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:30:02.949852 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:02.950406 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:30:02.951908 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:30:02.952693 | orchestrator |
2025-06-01 03:30:02.955230 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-01 03:30:02.955841 | orchestrator | Sunday 01 June 2025 03:30:02 +0000 (0:00:00.341) 0:04:58.425 ***********
2025-06-01 03:30:03.027824 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:30:03.063794 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:30:03.109857 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:30:03.144955 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:30:03.191132 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:03.383753 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:30:03.385000 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:30:03.385714 | orchestrator |
2025-06-01 03:30:03.386934 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-01 03:30:03.387686 | orchestrator | Sunday 01 June 2025 03:30:03 +0000 (0:00:00.430) 0:04:58.856 ***********
2025-06-01 03:30:03.494380 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:03.531810 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:30:03.570318 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:30:03.609021 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:30:03.711275 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:30:03.712341 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:30:03.713211 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:30:03.714215 | orchestrator |
2025-06-01 03:30:03.715250 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-01 03:30:03.716738 | orchestrator | Sunday 01 June 2025 03:30:03 +0000 (0:00:00.330) 0:04:59.187 ***********
2025-06-01 03:30:03.825567 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:30:03.874057 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:30:03.908954 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:30:03.950673 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:30:04.018326 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:04.018697 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:30:04.019626 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:30:04.021096 | orchestrator | 2025-06-01 03:30:04.021612 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-01 03:30:04.021982 | orchestrator | Sunday 01 June 2025 03:30:04 +0000 (0:00:00.305) 0:04:59.493 *********** 2025-06-01 03:30:04.123710 | orchestrator | ok: [testbed-manager] 2025-06-01 03:30:04.160971 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:30:04.216992 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:30:04.253212 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:30:04.344895 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:30:04.345151 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:30:04.346576 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:30:04.347183 | orchestrator | 2025-06-01 03:30:04.348306 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-01 03:30:04.349155 | orchestrator | Sunday 01 June 2025 03:30:04 +0000 (0:00:00.326) 0:04:59.819 *********** 2025-06-01 03:30:04.467116 | orchestrator | ok: [testbed-manager] =>  2025-06-01 03:30:04.467217 | orchestrator |  docker_version: 5:27.5.1 2025-06-01 03:30:04.503854 | orchestrator | ok: [testbed-node-3] =>  2025-06-01 03:30:04.504062 | orchestrator |  docker_version: 5:27.5.1 2025-06-01 03:30:04.541326 | orchestrator | ok: [testbed-node-4] =>  2025-06-01 03:30:04.541442 | orchestrator |  docker_version: 5:27.5.1 2025-06-01 03:30:04.574694 | orchestrator | ok: [testbed-node-5] =>  2025-06-01 03:30:04.574771 | orchestrator |  docker_version: 5:27.5.1 2025-06-01 03:30:04.636832 | orchestrator | ok: [testbed-node-0] =>  2025-06-01 03:30:04.637107 | orchestrator |  docker_version: 5:27.5.1 2025-06-01 03:30:04.638094 | orchestrator | ok: [testbed-node-1] =>  2025-06-01 03:30:04.638782 | orchestrator |  docker_version: 
5:27.5.1 2025-06-01 03:30:04.639545 | orchestrator | ok: [testbed-node-2] =>  2025-06-01 03:30:04.640170 | orchestrator |  docker_version: 5:27.5.1 2025-06-01 03:30:04.640803 | orchestrator | 2025-06-01 03:30:04.641621 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-01 03:30:04.642127 | orchestrator | Sunday 01 June 2025 03:30:04 +0000 (0:00:00.294) 0:05:00.113 *********** 2025-06-01 03:30:04.763754 | orchestrator | ok: [testbed-manager] =>  2025-06-01 03:30:04.763832 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:04.914473 | orchestrator | ok: [testbed-node-3] =>  2025-06-01 03:30:04.915556 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:04.958668 | orchestrator | ok: [testbed-node-4] =>  2025-06-01 03:30:04.959326 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:04.995379 | orchestrator | ok: [testbed-node-5] =>  2025-06-01 03:30:04.995875 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:05.070233 | orchestrator | ok: [testbed-node-0] =>  2025-06-01 03:30:05.070449 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:05.072222 | orchestrator | ok: [testbed-node-1] =>  2025-06-01 03:30:05.073451 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:05.074177 | orchestrator | ok: [testbed-node-2] =>  2025-06-01 03:30:05.075376 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-01 03:30:05.075915 | orchestrator | 2025-06-01 03:30:05.077216 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-01 03:30:05.078233 | orchestrator | Sunday 01 June 2025 03:30:05 +0000 (0:00:00.431) 0:05:00.545 *********** 2025-06-01 03:30:05.182254 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:30:05.217825 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:30:05.250685 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:30:05.285203 | orchestrator | skipping: 
[testbed-node-5]
2025-06-01 03:30:05.342611 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:05.343382 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:30:05.344092 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:30:05.345426 | orchestrator |
2025-06-01 03:30:05.346393 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-01 03:30:05.347823 | orchestrator | Sunday 01 June 2025 03:30:05 +0000 (0:00:00.273) 0:05:00.819 ***********
2025-06-01 03:30:05.437982 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:30:05.483360 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:30:05.524914 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:30:05.559842 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:30:05.597974 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:05.666398 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:30:05.667088 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:30:05.668146 | orchestrator |
2025-06-01 03:30:05.669356 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-01 03:30:05.670458 | orchestrator | Sunday 01 June 2025 03:30:05 +0000 (0:00:00.323) 0:05:01.142 ***********
2025-06-01 03:30:06.120065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:30:06.121190 | orchestrator |
2025-06-01 03:30:06.122437 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-01 03:30:06.123334 | orchestrator | Sunday 01 June 2025 03:30:06 +0000 (0:00:00.451) 0:05:01.593 ***********
2025-06-01 03:30:06.940874 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:30:06.941236 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:30:06.942715 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:06.944013 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:30:06.944863 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:30:06.945246 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:30:06.947217 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:30:06.948056 | orchestrator |
2025-06-01 03:30:06.948684 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-01 03:30:06.949112 | orchestrator | Sunday 01 June 2025 03:30:06 +0000 (0:00:02.840) 0:05:02.414 ***********
2025-06-01 03:30:09.780486 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:30:09.781651 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:30:09.783539 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:30:09.784079 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:30:09.785416 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:30:09.786468 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:30:09.787469 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:09.788971 | orchestrator |
2025-06-01 03:30:09.789548 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-01 03:30:09.790591 | orchestrator | Sunday 01 June 2025 03:30:09 +0000 (0:00:02.840) 0:05:05.255 ***********
2025-06-01 03:30:09.874679 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-01 03:30:09.875855 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-01 03:30:09.965630 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-01 03:30:09.965720 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-01 03:30:09.965733 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-01 03:30:10.049715 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:30:10.051752 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-01 03:30:10.054333 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-01 03:30:10.054368 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-01 03:30:10.054604 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-01 03:30:10.126150 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:30:10.130434 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-01 03:30:10.131004 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-01 03:30:10.355434 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:30:10.359773 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-01 03:30:10.360741 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-01 03:30:10.361714 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-01 03:30:10.362621 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-01 03:30:10.427081 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:30:10.427666 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-01 03:30:10.428835 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-01 03:30:10.581874 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:10.581955 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-01 03:30:10.581964 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:30:10.582063 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-01 03:30:10.582472 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-01 03:30:10.582964 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-01 03:30:10.583436 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:30:10.583826 | orchestrator |
2025-06-01 03:30:10.584366 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-01 03:30:10.584758 | orchestrator | Sunday 01 June 2025 03:30:10 +0000 (0:00:00.800) 0:05:06.055 ***********
2025-06-01 03:30:21.969274 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:21.969394 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:21.969412 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:21.969423 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:21.969998 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:21.970363 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:21.971275 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:21.972052 | orchestrator |
2025-06-01 03:30:21.972706 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-01 03:30:21.975802 | orchestrator | Sunday 01 June 2025 03:30:21 +0000 (0:00:11.386) 0:05:17.442 ***********
2025-06-01 03:30:22.978257 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:22.978457 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:22.979827 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:22.980030 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:22.980777 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:22.982338 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:22.984015 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:22.984707 | orchestrator |
2025-06-01 03:30:22.985550 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-01 03:30:22.986233 | orchestrator | Sunday 01 June 2025 03:30:22 +0000 (0:00:01.011) 0:05:18.453 ***********
2025-06-01 03:30:30.270441 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:30.273549 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:30.273997 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:30.274571 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:30.275208 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:30.275828 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:30.276361 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:30.276947 | orchestrator |
2025-06-01 03:30:30.277578 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-01 03:30:30.278147 | orchestrator | Sunday 01 June 2025 03:30:30 +0000 (0:00:07.290) 0:05:25.743 ***********
2025-06-01 03:30:33.361718 | orchestrator | changed: [testbed-manager]
2025-06-01 03:30:33.361958 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:33.363387 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:33.363771 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:33.365118 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:33.365798 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:33.366951 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:33.367513 | orchestrator |
2025-06-01 03:30:33.368191 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-01 03:30:33.369452 | orchestrator | Sunday 01 June 2025 03:30:33 +0000 (0:00:03.093) 0:05:28.837 ***********
2025-06-01 03:30:34.910982 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:34.912314 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:34.912367 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:34.912723 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:34.913414 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:34.914097 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:34.914767 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:34.915226 | orchestrator |
2025-06-01 03:30:34.915873 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-01 03:30:34.916551 | orchestrator | Sunday 01 June 2025 03:30:34 +0000
(0:00:01.546) 0:05:30.383 ***********
2025-06-01 03:30:36.279800 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:36.280898 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:36.281104 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:36.282775 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:36.283861 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:36.284594 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:36.285502 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:36.286108 | orchestrator |
2025-06-01 03:30:36.287105 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-01 03:30:36.288031 | orchestrator | Sunday 01 June 2025 03:30:36 +0000 (0:00:01.370) 0:05:31.753 ***********
2025-06-01 03:30:36.477042 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:30:36.549387 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:30:36.614176 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:30:36.689044 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:30:36.873946 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:30:36.874163 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:30:36.875043 | orchestrator | changed: [testbed-manager]
2025-06-01 03:30:36.876027 | orchestrator |
2025-06-01 03:30:36.876832 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-01 03:30:36.878075 | orchestrator | Sunday 01 June 2025 03:30:36 +0000 (0:00:00.597) 0:05:32.351 ***********
2025-06-01 03:30:46.438266 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:46.438387 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:46.438404 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:46.439196 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:46.439844 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:46.440801 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:46.442635 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:46.443779 | orchestrator |
2025-06-01 03:30:46.444701 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-01 03:30:46.445933 | orchestrator | Sunday 01 June 2025 03:30:46 +0000 (0:00:09.558) 0:05:41.909 ***********
2025-06-01 03:30:47.318383 | orchestrator | changed: [testbed-manager]
2025-06-01 03:30:47.318709 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:47.319968 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:47.320443 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:47.321580 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:47.322131 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:47.322881 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:47.323611 | orchestrator |
2025-06-01 03:30:47.324259 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-01 03:30:47.325236 | orchestrator | Sunday 01 June 2025 03:30:47 +0000 (0:00:00.883) 0:05:42.792 ***********
2025-06-01 03:30:56.017421 | orchestrator | ok: [testbed-manager]
2025-06-01 03:30:56.017576 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:30:56.018102 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:30:56.018744 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:30:56.021347 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:30:56.022757 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:30:56.023372 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:30:56.023820 | orchestrator |
2025-06-01 03:30:56.024520 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-01 03:30:56.025238 | orchestrator | Sunday 01 June 2025 03:30:56 +0000 (0:00:08.701) 0:05:51.493 ***********
2025-06-01 03:31:06.514163 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:06.514338 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:06.514357 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:06.517398 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:06.518057 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:06.519663 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:06.520978 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:06.522589 | orchestrator |
2025-06-01 03:31:06.523221 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-01 03:31:06.523879 | orchestrator | Sunday 01 June 2025 03:31:06 +0000 (0:00:10.490) 0:06:01.983 ***********
2025-06-01 03:31:06.863113 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-01 03:31:07.706838 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-01 03:31:07.706940 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-01 03:31:07.707781 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-01 03:31:07.708634 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-01 03:31:07.709946 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-01 03:31:07.711161 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-01 03:31:07.712210 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-01 03:31:07.712936 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-01 03:31:07.713430 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-01 03:31:07.713974 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-01 03:31:07.714548 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-01 03:31:07.715118 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-01 03:31:07.715726 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-01 03:31:07.716381 | orchestrator |
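[editor's note] The pin, lock, and block/unblock tasks recorded above ("Pin docker package version", "Lock containerd package", "Unblock installation of python docker packages") are standard apt mechanisms. A minimal sketch of what such a pin can look like; the file path is an assumption, only the version string 5:27.5.1 is taken from this log:

```
# /etc/apt/preferences.d/docker -- hypothetical path, not taken from this job
# Pins docker-ce to the version printed by the role above.
Package: docker-ce
Pin: version 5:27.5.1*
Pin-Priority: 1001
```

A Pin-Priority above 1000 forces the pinned version even if it would be a downgrade; blocking a package outright is typically done with a negative Pin-Priority or with `apt-mark hold`.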
2025-06-01 03:31:07.716872 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-01 03:31:07.717334 | orchestrator | Sunday 01 June 2025 03:31:07 +0000 (0:00:01.194) 0:06:03.178 ***********
2025-06-01 03:31:07.854260 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:07.919590 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:31:07.987174 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:31:08.050666 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:31:08.117661 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:31:08.240644 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:31:08.241736 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:31:08.243009 | orchestrator |
2025-06-01 03:31:08.243646 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-01 03:31:08.244603 | orchestrator | Sunday 01 June 2025 03:31:08 +0000 (0:00:00.538) 0:06:03.717 ***********
2025-06-01 03:31:12.049656 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:12.050443 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:12.050539 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:12.050554 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:12.050583 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:12.050664 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:12.050938 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:12.052994 | orchestrator |
2025-06-01 03:31:12.053809 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-01 03:31:12.054236 | orchestrator | Sunday 01 June 2025 03:31:12 +0000 (0:00:03.802) 0:06:07.520 ***********
2025-06-01 03:31:12.207613 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:12.279677 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:31:12.343057 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:31:12.415642 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:31:12.491450 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:31:12.588020 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:31:12.589098 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:31:12.590583 | orchestrator |
2025-06-01 03:31:12.593013 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-01 03:31:12.593040 | orchestrator | Sunday 01 June 2025 03:31:12 +0000 (0:00:00.542) 0:06:08.062 ***********
2025-06-01 03:31:12.666115 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-01 03:31:12.666291 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-01 03:31:12.734055 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:12.734436 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-01 03:31:12.734908 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-01 03:31:12.801764 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:31:12.803203 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-01 03:31:12.806005 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-01 03:31:12.886489 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:31:12.886679 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-01 03:31:12.887448 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-01 03:31:12.960912 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:31:12.961665 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-01 03:31:12.962688 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-01 03:31:13.033545 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:31:13.034171 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-01 03:31:13.034878 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-01 03:31:13.153805 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:31:13.154146 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-01 03:31:13.155389 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-01 03:31:13.159231 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:31:13.159270 | orchestrator |
2025-06-01 03:31:13.159284 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-01 03:31:13.159297 | orchestrator | Sunday 01 June 2025 03:31:13 +0000 (0:00:00.567) 0:06:08.630 ***********
2025-06-01 03:31:13.282252 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:13.352117 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:31:13.416350 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:31:13.481804 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:31:13.551383 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:31:13.654346 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:31:13.657186 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:31:13.657275 | orchestrator |
2025-06-01 03:31:13.657291 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-01 03:31:13.657373 | orchestrator | Sunday 01 June 2025 03:31:13 +0000 (0:00:00.496) 0:06:09.127 ***********
2025-06-01 03:31:13.787185 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:13.849876 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:31:13.913380 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:31:13.982199 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:31:14.044937 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:31:14.133895 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:31:14.134677 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:31:14.135623 | orchestrator |
2025-06-01 03:31:14.139127 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-01 03:31:14.139160 | orchestrator | Sunday 01 June 2025 03:31:14 +0000 (0:00:00.481) 0:06:09.608 ***********
2025-06-01 03:31:14.270286 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:14.334573 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:31:14.405819 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:31:14.642396 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:31:14.709561 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:31:14.835445 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:31:14.836086 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:31:14.837058 | orchestrator |
2025-06-01 03:31:14.837961 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-01 03:31:14.838734 | orchestrator | Sunday 01 June 2025 03:31:14 +0000 (0:00:00.702) 0:06:10.310 ***********
2025-06-01 03:31:16.428512 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:16.429109 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:31:16.430271 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:31:16.430759 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:31:16.432554 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:31:16.433563 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:31:16.434270 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:31:16.435337 | orchestrator |
2025-06-01 03:31:16.435839 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-01 03:31:16.436266 | orchestrator | Sunday 01 June 2025 03:31:16 +0000 (0:00:01.593) 0:06:11.904 ***********
2025-06-01 03:31:17.333099 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:31:17.333506 | orchestrator |
2025-06-01 03:31:17.334097 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-01 03:31:17.334904 | orchestrator | Sunday 01 June 2025 03:31:17 +0000 (0:00:00.900) 0:06:12.805 ***********
2025-06-01 03:31:17.775276 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:18.202384 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:18.202587 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:18.203725 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:18.204864 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:18.206210 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:18.207386 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:18.208259 | orchestrator |
2025-06-01 03:31:18.209047 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-01 03:31:18.209607 | orchestrator | Sunday 01 June 2025 03:31:18 +0000 (0:00:00.871) 0:06:13.676 ***********
2025-06-01 03:31:18.627672 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:18.692602 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:19.277438 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:19.277690 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:19.279174 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:19.280081 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:19.282364 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:19.282391 | orchestrator |
2025-06-01 03:31:19.283164 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-01 03:31:19.283610 | orchestrator | Sunday 01 June 2025 03:31:19 +0000 (0:00:01.075) 0:06:14.751 ***********
2025-06-01 03:31:20.754757 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:20.754854 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:20.754869 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:20.755284 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:20.756621 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:20.757406 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:20.758302 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:20.759352 | orchestrator |
2025-06-01 03:31:20.760641 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-01 03:31:20.761091 | orchestrator | Sunday 01 June 2025 03:31:20 +0000 (0:00:01.477) 0:06:16.229 ***********
2025-06-01 03:31:20.894693 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:31:22.090826 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:31:22.090931 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:31:22.093792 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:31:22.094212 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:31:22.095367 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:31:22.096211 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:31:22.097171 | orchestrator |
2025-06-01 03:31:22.098355 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-01 03:31:22.099407 | orchestrator | Sunday 01 June 2025 03:31:22 +0000 (0:00:01.334) 0:06:17.563 ***********
2025-06-01 03:31:23.412914 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:23.413208 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:23.414353 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:23.415835 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:23.416513 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:23.417127 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:23.418089 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:23.418544 | orchestrator |
2025-06-01 03:31:23.419349 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-01 03:31:23.419813 | orchestrator | Sunday 01 June 2025 03:31:23 +0000 (0:00:01.322) 0:06:18.886 ***********
2025-06-01 03:31:24.816954 | orchestrator | changed: [testbed-manager]
2025-06-01 03:31:24.817064 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:31:24.818225 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:31:24.819879 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:31:24.820604 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:31:24.821733 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:31:24.822361 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:31:24.823297 | orchestrator |
2025-06-01 03:31:24.823674 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-01 03:31:24.824768 | orchestrator | Sunday 01 June 2025 03:31:24 +0000 (0:00:01.404) 0:06:20.290 ***********
2025-06-01 03:31:25.851906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 03:31:25.852091 | orchestrator |
2025-06-01 03:31:25.853675 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-01 03:31:25.855401 | orchestrator | Sunday 01 June 2025 03:31:25 +0000 (0:00:01.036) 0:06:21.327 ***********
2025-06-01 03:31:27.169571 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:27.169649 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:31:27.170325 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:31:27.171337 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:31:27.173948 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:31:27.174842 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:31:27.175567 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:31:27.176998 | orchestrator |
2025-06-01 03:31:27.177989 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-01 03:31:27.178587 | orchestrator | Sunday 01 June 2025 03:31:27 +0000 (0:00:01.318) 0:06:22.645 ***********
2025-06-01 03:31:28.344545 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:28.344689 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:31:28.344773 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:31:28.347102 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:31:28.348222 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:31:28.349352 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:31:28.350533 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:31:28.351667 | orchestrator |
2025-06-01 03:31:28.352101 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-01 03:31:28.353128 | orchestrator | Sunday 01 June 2025 03:31:28 +0000 (0:00:01.171) 0:06:23.817 ***********
2025-06-01 03:31:29.705087 | orchestrator | ok: [testbed-manager]
2025-06-01 03:31:29.706269 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:31:29.707902 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:31:29.708317 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:31:29.710766 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:31:29.711611 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:31:29.712856 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:31:29.713978 | orchestrator |
2025-06-01 03:31:29.715851 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-01 03:31:29.716793 | orchestrator | Sunday 01 June 2025 03:31:29 +0000 (0:00:01.360) 0:06:25.178 ***********
2025-06-01 03:31:30.818672 | orchestrator | ok: [testbed-manager]
2025-06-01
03:31:30.818780 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:31:30.818861 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:31:30.819579 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:31:30.819890 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:31:30.820882 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:31:30.821947 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:31:30.824292 | orchestrator | 2025-06-01 03:31:30.825488 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-01 03:31:30.826576 | orchestrator | Sunday 01 June 2025 03:31:30 +0000 (0:00:01.113) 0:06:26.292 *********** 2025-06-01 03:31:31.969244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:31:31.969956 | orchestrator | 2025-06-01 03:31:31.971616 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.973617 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.862) 0:06:27.155 *********** 2025-06-01 03:31:31.974388 | orchestrator | 2025-06-01 03:31:31.975081 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.975974 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.041) 0:06:27.196 *********** 2025-06-01 03:31:31.976906 | orchestrator | 2025-06-01 03:31:31.977542 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.978249 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.044) 0:06:27.241 *********** 2025-06-01 03:31:31.979149 | orchestrator | 2025-06-01 03:31:31.979548 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.980173 | 
orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.038) 0:06:27.279 *********** 2025-06-01 03:31:31.980650 | orchestrator | 2025-06-01 03:31:31.981340 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.981858 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.037) 0:06:27.317 *********** 2025-06-01 03:31:31.982647 | orchestrator | 2025-06-01 03:31:31.983385 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.985352 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.045) 0:06:27.363 *********** 2025-06-01 03:31:31.985673 | orchestrator | 2025-06-01 03:31:31.986246 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 03:31:31.986630 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.038) 0:06:27.401 *********** 2025-06-01 03:31:31.987032 | orchestrator | 2025-06-01 03:31:31.988045 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-01 03:31:31.988443 | orchestrator | Sunday 01 June 2025 03:31:31 +0000 (0:00:00.038) 0:06:27.440 *********** 2025-06-01 03:31:33.206216 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:31:33.206392 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:31:33.208707 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:31:33.209058 | orchestrator | 2025-06-01 03:31:33.211289 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-01 03:31:33.211544 | orchestrator | Sunday 01 June 2025 03:31:33 +0000 (0:00:01.239) 0:06:28.679 *********** 2025-06-01 03:31:34.518582 | orchestrator | changed: [testbed-manager] 2025-06-01 03:31:34.519289 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:31:34.521291 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:31:34.521679 | orchestrator | changed: [testbed-node-0] 
2025-06-01 03:31:34.523102 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:31:34.524425 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:31:34.525497 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:31:34.525891 | orchestrator | 2025-06-01 03:31:34.526897 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-01 03:31:34.528343 | orchestrator | Sunday 01 June 2025 03:31:34 +0000 (0:00:01.313) 0:06:29.992 *********** 2025-06-01 03:31:35.616747 | orchestrator | changed: [testbed-manager] 2025-06-01 03:31:35.617527 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:31:35.618774 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:31:35.619330 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:31:35.619885 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:31:35.620574 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:31:35.621161 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:31:35.621566 | orchestrator | 2025-06-01 03:31:35.622091 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-01 03:31:35.622608 | orchestrator | Sunday 01 June 2025 03:31:35 +0000 (0:00:01.097) 0:06:31.090 *********** 2025-06-01 03:31:35.761933 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:31:38.055170 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:31:38.055612 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:31:38.056239 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:31:38.057221 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:31:38.057769 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:31:38.058847 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:31:38.059486 | orchestrator | 2025-06-01 03:31:38.059936 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-01 03:31:38.060540 | orchestrator | Sunday 01 June 2025 
03:31:38 +0000 (0:00:02.437) 0:06:33.527 *********** 2025-06-01 03:31:38.157241 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:31:38.157335 | orchestrator | 2025-06-01 03:31:38.158123 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-01 03:31:38.159056 | orchestrator | Sunday 01 June 2025 03:31:38 +0000 (0:00:00.104) 0:06:33.632 *********** 2025-06-01 03:31:39.225455 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:39.225696 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:31:39.226296 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:31:39.227153 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:31:39.227588 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:31:39.228686 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:31:39.229119 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:31:39.230270 | orchestrator | 2025-06-01 03:31:39.230809 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-01 03:31:39.231435 | orchestrator | Sunday 01 June 2025 03:31:39 +0000 (0:00:01.066) 0:06:34.699 *********** 2025-06-01 03:31:39.589191 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:31:39.665821 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:31:39.730543 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:31:39.800816 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:31:39.866564 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:31:39.981945 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:31:39.983611 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:31:39.984772 | orchestrator | 2025-06-01 03:31:39.985766 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-01 03:31:39.987100 | orchestrator | Sunday 01 June 2025 03:31:39 +0000 (0:00:00.757) 0:06:35.457 *********** 2025-06-01 03:31:40.873923 
| orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:31:40.874532 | orchestrator | 2025-06-01 03:31:40.875504 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-01 03:31:40.877019 | orchestrator | Sunday 01 June 2025 03:31:40 +0000 (0:00:00.892) 0:06:36.350 *********** 2025-06-01 03:31:41.288233 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:41.698839 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:31:41.700016 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:31:41.700061 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:31:41.700451 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:31:41.701705 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:31:41.702215 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:31:41.705615 | orchestrator | 2025-06-01 03:31:41.706773 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-01 03:31:41.707144 | orchestrator | Sunday 01 June 2025 03:31:41 +0000 (0:00:00.826) 0:06:37.176 *********** 2025-06-01 03:31:44.366305 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-01 03:31:44.368904 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-01 03:31:44.370148 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-01 03:31:44.372110 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-01 03:31:44.375180 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-01 03:31:44.376986 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-01 03:31:44.377771 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-01 03:31:44.378763 | orchestrator | changed: 
[testbed-node-2] => (item=docker_containers) 2025-06-01 03:31:44.379684 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-01 03:31:44.381021 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-01 03:31:44.381827 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-01 03:31:44.382869 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-01 03:31:44.384054 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-01 03:31:44.384855 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-01 03:31:44.385834 | orchestrator | 2025-06-01 03:31:44.387328 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-01 03:31:44.388118 | orchestrator | Sunday 01 June 2025 03:31:44 +0000 (0:00:02.663) 0:06:39.840 *********** 2025-06-01 03:31:44.509334 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:31:44.573998 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:31:44.643359 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:31:44.705918 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:31:44.768348 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:31:44.878245 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:31:44.878357 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:31:44.879014 | orchestrator | 2025-06-01 03:31:44.879509 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-01 03:31:44.880170 | orchestrator | Sunday 01 June 2025 03:31:44 +0000 (0:00:00.514) 0:06:40.354 *********** 2025-06-01 03:31:45.683321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:31:45.684161 
| orchestrator | 2025-06-01 03:31:45.684642 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-01 03:31:45.685433 | orchestrator | Sunday 01 June 2025 03:31:45 +0000 (0:00:00.797) 0:06:41.152 *********** 2025-06-01 03:31:46.249314 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:46.313700 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:31:46.752923 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:31:46.753685 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:31:46.755003 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:31:46.755923 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:31:46.761371 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:31:46.761959 | orchestrator | 2025-06-01 03:31:46.763216 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-01 03:31:46.763666 | orchestrator | Sunday 01 June 2025 03:31:46 +0000 (0:00:01.073) 0:06:42.226 *********** 2025-06-01 03:31:47.181334 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:47.286375 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:31:47.668774 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:31:47.668876 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:31:47.669714 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:31:47.670300 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:31:47.671691 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:31:47.672620 | orchestrator | 2025-06-01 03:31:47.673411 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-01 03:31:47.674164 | orchestrator | Sunday 01 June 2025 03:31:47 +0000 (0:00:00.915) 0:06:43.141 *********** 2025-06-01 03:31:47.803907 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:31:47.866425 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:31:47.934365 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:31:48.003552 | 
orchestrator | skipping: [testbed-node-5] 2025-06-01 03:31:48.078367 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:31:48.173814 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:31:48.174795 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:31:48.175527 | orchestrator | 2025-06-01 03:31:48.176051 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-01 03:31:48.176865 | orchestrator | Sunday 01 June 2025 03:31:48 +0000 (0:00:00.507) 0:06:43.648 *********** 2025-06-01 03:31:49.541022 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:49.542413 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:31:49.544903 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:31:49.545673 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:31:49.547001 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:31:49.547742 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:31:49.548797 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:31:49.549666 | orchestrator | 2025-06-01 03:31:49.550385 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-01 03:31:49.550868 | orchestrator | Sunday 01 June 2025 03:31:49 +0000 (0:00:01.366) 0:06:45.015 *********** 2025-06-01 03:31:49.664554 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:31:49.734776 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:31:49.797334 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:31:49.859592 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:31:49.926861 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:31:50.027574 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:31:50.027668 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:31:50.027765 | orchestrator | 2025-06-01 03:31:50.028291 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-01 03:31:50.029015 | orchestrator | 
Sunday 01 June 2025 03:31:50 +0000 (0:00:00.486) 0:06:45.501 *********** 2025-06-01 03:31:57.427813 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:57.427932 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:31:57.428916 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:31:57.428939 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:31:57.428952 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:31:57.428964 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:31:57.429005 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:31:57.429018 | orchestrator | 2025-06-01 03:31:57.429723 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-01 03:31:57.431273 | orchestrator | Sunday 01 June 2025 03:31:57 +0000 (0:00:07.399) 0:06:52.901 *********** 2025-06-01 03:31:58.760423 | orchestrator | ok: [testbed-manager] 2025-06-01 03:31:58.760596 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:31:58.760684 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:31:58.761604 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:31:58.762259 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:31:58.763151 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:31:58.763625 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:31:58.764362 | orchestrator | 2025-06-01 03:31:58.765726 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-01 03:31:58.766971 | orchestrator | Sunday 01 June 2025 03:31:58 +0000 (0:00:01.333) 0:06:54.234 *********** 2025-06-01 03:32:00.493048 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:00.495025 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:00.495491 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:00.496004 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:00.498004 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:00.499578 | 
orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:00.500001 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:00.501009 | orchestrator | 2025-06-01 03:32:00.502311 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-01 03:32:00.502335 | orchestrator | Sunday 01 June 2025 03:32:00 +0000 (0:00:01.729) 0:06:55.963 *********** 2025-06-01 03:32:02.172187 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:02.174948 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:02.175544 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:02.176246 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:02.176860 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:02.177511 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:02.178238 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:02.179064 | orchestrator | 2025-06-01 03:32:02.180113 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-01 03:32:02.181136 | orchestrator | Sunday 01 June 2025 03:32:02 +0000 (0:00:01.649) 0:06:57.613 *********** 2025-06-01 03:32:02.595385 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:03.234541 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:03.235139 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:03.236605 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:03.237971 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:03.239664 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:03.240343 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:03.241229 | orchestrator | 2025-06-01 03:32:03.242294 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-01 03:32:03.243800 | orchestrator | Sunday 01 June 2025 03:32:03 +0000 (0:00:01.097) 0:06:58.711 *********** 2025-06-01 03:32:03.360810 | orchestrator | skipping: [testbed-manager] 2025-06-01 
03:32:03.432955 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:32:03.496254 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:32:03.559421 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:32:03.627947 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:32:04.008149 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:32:04.008847 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:32:04.009789 | orchestrator | 2025-06-01 03:32:04.010601 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-01 03:32:04.011540 | orchestrator | Sunday 01 June 2025 03:32:03 +0000 (0:00:00.772) 0:06:59.483 *********** 2025-06-01 03:32:04.150548 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:32:04.215703 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:32:04.282962 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:32:04.345710 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:32:04.407776 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:32:04.506790 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:32:04.506956 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:32:04.507815 | orchestrator | 2025-06-01 03:32:04.508608 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-01 03:32:04.509447 | orchestrator | Sunday 01 June 2025 03:32:04 +0000 (0:00:00.500) 0:06:59.983 *********** 2025-06-01 03:32:04.637102 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:04.705901 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:04.769008 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:04.830716 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:05.058218 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:05.164739 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:05.165196 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:05.166533 | orchestrator | 2025-06-01 
03:32:05.167162 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-01 03:32:05.168427 | orchestrator | Sunday 01 June 2025 03:32:05 +0000 (0:00:00.656) 0:07:00.640 *********** 2025-06-01 03:32:05.298843 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:05.364296 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:05.427285 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:05.495825 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:05.559297 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:05.665917 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:05.666514 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:05.667775 | orchestrator | 2025-06-01 03:32:05.668477 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-01 03:32:05.669403 | orchestrator | Sunday 01 June 2025 03:32:05 +0000 (0:00:00.500) 0:07:01.140 *********** 2025-06-01 03:32:05.796250 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:05.862307 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:05.929647 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:05.992169 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:06.054229 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:06.158297 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:06.159656 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:06.160419 | orchestrator | 2025-06-01 03:32:06.161847 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-01 03:32:06.162915 | orchestrator | Sunday 01 June 2025 03:32:06 +0000 (0:00:00.494) 0:07:01.635 *********** 2025-06-01 03:32:11.732568 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:11.732684 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:11.732700 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:11.732826 | orchestrator | ok: [testbed-node-1] 2025-06-01 
03:32:11.733937 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:11.735436 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:11.735988 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:11.736642 | orchestrator | 2025-06-01 03:32:11.737642 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-01 03:32:11.738274 | orchestrator | Sunday 01 June 2025 03:32:11 +0000 (0:00:05.570) 0:07:07.205 *********** 2025-06-01 03:32:11.884604 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:32:11.946698 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:32:12.011802 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:32:12.081550 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:32:12.139764 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:32:12.250612 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:32:12.251583 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:32:12.252309 | orchestrator | 2025-06-01 03:32:12.253314 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-01 03:32:12.254004 | orchestrator | Sunday 01 June 2025 03:32:12 +0000 (0:00:00.521) 0:07:07.726 *********** 2025-06-01 03:32:13.218427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:32:13.219328 | orchestrator | 2025-06-01 03:32:13.220116 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-01 03:32:13.222165 | orchestrator | Sunday 01 June 2025 03:32:13 +0000 (0:00:00.966) 0:07:08.693 *********** 2025-06-01 03:32:14.925065 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:14.925357 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:14.926317 | orchestrator | ok: 
[testbed-node-4] 2025-06-01 03:32:14.928111 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:14.928858 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:14.929764 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:14.931558 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:14.932293 | orchestrator | 2025-06-01 03:32:14.932964 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-01 03:32:14.933496 | orchestrator | Sunday 01 June 2025 03:32:14 +0000 (0:00:01.705) 0:07:10.399 *********** 2025-06-01 03:32:16.026894 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:16.027255 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:16.028400 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:16.029421 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:16.030152 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:16.030992 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:16.031740 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:16.032434 | orchestrator | 2025-06-01 03:32:16.033109 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-01 03:32:16.033768 | orchestrator | Sunday 01 June 2025 03:32:16 +0000 (0:00:01.103) 0:07:11.502 *********** 2025-06-01 03:32:16.633274 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:17.052205 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:17.052626 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:17.055521 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:17.055555 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:17.056043 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:17.057755 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:17.058754 | orchestrator | 2025-06-01 03:32:17.059310 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-01 03:32:17.060085 | orchestrator | Sunday 01 June 2025 
03:32:17 +0000 (0:00:01.022) 0:07:12.525 *********** 2025-06-01 03:32:18.661408 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.662563 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.663678 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.665298 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.666267 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.667946 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.668623 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 03:32:18.669894 | orchestrator | 2025-06-01 03:32:18.670368 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-01 03:32:18.671172 | orchestrator | Sunday 01 June 2025 03:32:18 +0000 (0:00:01.609) 0:07:14.134 *********** 2025-06-01 03:32:19.475042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:32:19.476242 | orchestrator | 2025-06-01 03:32:19.476875 | orchestrator | 
TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-01 03:32:19.477793 | orchestrator | Sunday 01 June 2025 03:32:19 +0000 (0:00:00.812) 0:07:14.947 *********** 2025-06-01 03:32:27.704760 | orchestrator | changed: [testbed-manager] 2025-06-01 03:32:27.707074 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:27.708244 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:27.709603 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:27.710527 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:27.711854 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:27.712618 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:27.713598 | orchestrator | 2025-06-01 03:32:27.714294 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-01 03:32:27.715335 | orchestrator | Sunday 01 June 2025 03:32:27 +0000 (0:00:08.229) 0:07:23.177 *********** 2025-06-01 03:32:29.410184 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:29.410344 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:29.410954 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:29.411622 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:29.414690 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:29.415390 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:29.415959 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:29.416422 | orchestrator | 2025-06-01 03:32:29.417121 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-01 03:32:29.417524 | orchestrator | Sunday 01 June 2025 03:32:29 +0000 (0:00:01.706) 0:07:24.883 *********** 2025-06-01 03:32:30.646223 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:30.646395 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:30.646930 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:30.647559 | orchestrator | ok: [testbed-node-5] 2025-06-01 
03:32:30.649011 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:30.650097 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:30.651655 | orchestrator | 2025-06-01 03:32:30.652581 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-01 03:32:30.653361 | orchestrator | Sunday 01 June 2025 03:32:30 +0000 (0:00:01.236) 0:07:26.120 *********** 2025-06-01 03:32:32.058909 | orchestrator | changed: [testbed-manager] 2025-06-01 03:32:32.059091 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:32.059949 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:32.061135 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:32.061418 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:32.063020 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:32.066111 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:32.066585 | orchestrator | 2025-06-01 03:32:32.067206 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-01 03:32:32.067650 | orchestrator | 2025-06-01 03:32:32.070712 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-01 03:32:32.071010 | orchestrator | Sunday 01 June 2025 03:32:32 +0000 (0:00:01.414) 0:07:27.534 *********** 2025-06-01 03:32:32.196269 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:32:32.253873 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:32:32.311506 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:32:32.375287 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:32:32.431269 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:32:32.527955 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:32:32.528574 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:32:32.529658 | orchestrator | 2025-06-01 03:32:32.529899 | orchestrator | PLAY [Apply bootstrap roles part 3] 
******************************************** 2025-06-01 03:32:32.530809 | orchestrator | 2025-06-01 03:32:32.533688 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-01 03:32:32.534323 | orchestrator | Sunday 01 June 2025 03:32:32 +0000 (0:00:00.469) 0:07:28.004 *********** 2025-06-01 03:32:33.817193 | orchestrator | changed: [testbed-manager] 2025-06-01 03:32:33.818396 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:33.818566 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:33.819550 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:33.820633 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:33.821397 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:33.821870 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:33.823702 | orchestrator | 2025-06-01 03:32:33.824281 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-01 03:32:33.825071 | orchestrator | Sunday 01 June 2025 03:32:33 +0000 (0:00:01.288) 0:07:29.292 *********** 2025-06-01 03:32:35.189600 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:35.189754 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:35.189772 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:35.189856 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:35.193147 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:35.193178 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:35.193190 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:35.193202 | orchestrator | 2025-06-01 03:32:35.193215 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-01 03:32:35.193228 | orchestrator | Sunday 01 June 2025 03:32:35 +0000 (0:00:01.370) 0:07:30.662 *********** 2025-06-01 03:32:35.485934 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:32:35.544830 | orchestrator | skipping: [testbed-node-3] 
2025-06-01 03:32:35.613853 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:32:35.675036 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:32:35.732996 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:32:36.122596 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:32:36.123892 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:32:36.125023 | orchestrator | 2025-06-01 03:32:36.127999 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-01 03:32:36.128050 | orchestrator | Sunday 01 June 2025 03:32:36 +0000 (0:00:00.936) 0:07:31.599 *********** 2025-06-01 03:32:37.354836 | orchestrator | changed: [testbed-manager] 2025-06-01 03:32:37.355598 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:37.357405 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:37.358818 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:37.361219 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:37.363532 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:37.363549 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:37.364753 | orchestrator | 2025-06-01 03:32:37.365406 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-01 03:32:37.367636 | orchestrator | 2025-06-01 03:32:37.368740 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-01 03:32:37.369972 | orchestrator | Sunday 01 June 2025 03:32:37 +0000 (0:00:01.230) 0:07:32.830 *********** 2025-06-01 03:32:38.294985 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:32:38.299749 | orchestrator | 2025-06-01 03:32:38.303194 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-01 03:32:38.306997 | orchestrator | Sunday 01 
June 2025 03:32:38 +0000 (0:00:00.930) 0:07:33.760 *********** 2025-06-01 03:32:38.702619 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:39.128897 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:39.130058 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:39.130726 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:39.134102 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:39.134140 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:39.134152 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:39.136257 | orchestrator | 2025-06-01 03:32:39.136286 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-01 03:32:39.136377 | orchestrator | Sunday 01 June 2025 03:32:39 +0000 (0:00:00.845) 0:07:34.605 *********** 2025-06-01 03:32:40.252902 | orchestrator | changed: [testbed-manager] 2025-06-01 03:32:40.253100 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:40.253865 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:40.257555 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:40.257580 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:40.257591 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:40.257602 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:40.258140 | orchestrator | 2025-06-01 03:32:40.259158 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-01 03:32:40.260068 | orchestrator | Sunday 01 June 2025 03:32:40 +0000 (0:00:01.122) 0:07:35.728 *********** 2025-06-01 03:32:41.213882 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 03:32:41.214107 | orchestrator | 2025-06-01 03:32:41.214651 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-01 03:32:41.215257 | orchestrator | Sunday 01 June 
2025 03:32:41 +0000 (0:00:00.961) 0:07:36.689 *********** 2025-06-01 03:32:41.611338 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:42.022357 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:42.022520 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:42.022906 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:42.023234 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:42.024430 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:42.025289 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:42.026544 | orchestrator | 2025-06-01 03:32:42.027167 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-01 03:32:42.028163 | orchestrator | Sunday 01 June 2025 03:32:42 +0000 (0:00:00.806) 0:07:37.496 *********** 2025-06-01 03:32:42.432219 | orchestrator | changed: [testbed-manager] 2025-06-01 03:32:43.084932 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:32:43.086304 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:32:43.087594 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:32:43.090427 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:32:43.091794 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:32:43.092016 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:32:43.093355 | orchestrator | 2025-06-01 03:32:43.095629 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:32:43.095717 | orchestrator | 2025-06-01 03:32:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 03:32:43.095735 | orchestrator | 2025-06-01 03:32:43 | INFO  | Please wait and do not abort execution. 
2025-06-01 03:32:43.096523 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-01 03:32:43.097588 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 03:32:43.098357 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 03:32:43.098866 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 03:32:43.100046 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-01 03:32:43.100804 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 03:32:43.101648 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 03:32:43.102192 | orchestrator | 2025-06-01 03:32:43.103015 | orchestrator | 2025-06-01 03:32:43.103599 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 03:32:43.104280 | orchestrator | Sunday 01 June 2025 03:32:43 +0000 (0:00:01.067) 0:07:38.563 *********** 2025-06-01 03:32:43.105086 | orchestrator | =============================================================================== 2025-06-01 03:32:43.105635 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.01s 2025-06-01 03:32:43.105990 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.79s 2025-06-01 03:32:43.107020 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.19s 2025-06-01 03:32:43.107374 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.40s 2025-06-01 03:32:43.108206 | orchestrator | osism.commons.systohc : Install util-linux-extra 
package --------------- 12.02s 2025-06-01 03:32:43.108842 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.44s 2025-06-01 03:32:43.109326 | orchestrator | osism.services.docker : Install apt-transport-https package ------------ 11.39s 2025-06-01 03:32:43.109852 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.49s 2025-06-01 03:32:43.110568 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.56s 2025-06-01 03:32:43.110951 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.70s 2025-06-01 03:32:43.111594 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.23s 2025-06-01 03:32:43.111897 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.96s 2025-06-01 03:32:43.112527 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.55s 2025-06-01 03:32:43.112775 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.54s 2025-06-01 03:32:43.113409 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.40s 2025-06-01 03:32:43.113934 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.30s 2025-06-01 03:32:43.114488 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.29s 2025-06-01 03:32:43.114900 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.69s 2025-06-01 03:32:43.115366 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.63s 2025-06-01 03:32:43.115706 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.60s 2025-06-01 03:32:43.751405 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-01 03:32:43.751554 | 
orchestrator | + osism apply network 2025-06-01 03:32:45.754734 | orchestrator | Registering Redlock._acquired_script 2025-06-01 03:32:45.754866 | orchestrator | Registering Redlock._extend_script 2025-06-01 03:32:45.754893 | orchestrator | Registering Redlock._release_script 2025-06-01 03:32:45.814982 | orchestrator | 2025-06-01 03:32:45 | INFO  | Task 66cf41d6-9414-4789-a335-a59821102a16 (network) was prepared for execution. 2025-06-01 03:32:45.815072 | orchestrator | 2025-06-01 03:32:45 | INFO  | It takes a moment until task 66cf41d6-9414-4789-a335-a59821102a16 (network) has been started and output is visible here. 2025-06-01 03:32:49.936991 | orchestrator | 2025-06-01 03:32:49.939996 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-01 03:32:49.940044 | orchestrator | 2025-06-01 03:32:49.940889 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-01 03:32:49.941893 | orchestrator | Sunday 01 June 2025 03:32:49 +0000 (0:00:00.269) 0:00:00.269 *********** 2025-06-01 03:32:50.084902 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:50.159875 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:50.234796 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:50.310748 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:50.489430 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:50.623811 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:50.624335 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:50.626657 | orchestrator | 2025-06-01 03:32:50.626685 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-01 03:32:50.627150 | orchestrator | Sunday 01 June 2025 03:32:50 +0000 (0:00:00.686) 0:00:00.956 *********** 2025-06-01 03:32:51.759323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 03:32:51.759630 | orchestrator | 2025-06-01 03:32:51.760661 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-01 03:32:51.761817 | orchestrator | Sunday 01 June 2025 03:32:51 +0000 (0:00:01.135) 0:00:02.091 *********** 2025-06-01 03:32:53.715469 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:53.716077 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:53.717125 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:53.717888 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:53.719338 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:53.720076 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:53.720757 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:53.721390 | orchestrator | 2025-06-01 03:32:53.721881 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-01 03:32:53.722750 | orchestrator | Sunday 01 June 2025 03:32:53 +0000 (0:00:01.958) 0:00:04.049 *********** 2025-06-01 03:32:55.386616 | orchestrator | ok: [testbed-manager] 2025-06-01 03:32:55.386967 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:32:55.390752 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:32:55.390778 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:32:55.390790 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:32:55.390802 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:32:55.390911 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:32:55.391390 | orchestrator | 2025-06-01 03:32:55.392157 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-01 03:32:55.392678 | orchestrator | Sunday 01 June 2025 03:32:55 +0000 (0:00:01.667) 0:00:05.717 *********** 2025-06-01 03:32:55.893301 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-01 03:32:56.348590 | 
orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-01 03:32:56.348690 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-01 03:32:56.349633 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-01 03:32:56.349776 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-01 03:32:56.349795 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-01 03:32:56.350544 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-01 03:32:56.351031 | orchestrator | 2025-06-01 03:32:56.351716 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-01 03:32:56.351851 | orchestrator | Sunday 01 June 2025 03:32:56 +0000 (0:00:00.967) 0:00:06.685 *********** 2025-06-01 03:32:59.589343 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 03:32:59.590002 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 03:32:59.591108 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-01 03:32:59.592158 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-01 03:32:59.592994 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 03:32:59.594075 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-01 03:32:59.594501 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-01 03:32:59.596081 | orchestrator | 2025-06-01 03:32:59.596818 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-01 03:32:59.597092 | orchestrator | Sunday 01 June 2025 03:32:59 +0000 (0:00:03.233) 0:00:09.919 *********** 2025-06-01 03:33:01.017876 | orchestrator | changed: [testbed-manager] 2025-06-01 03:33:01.018114 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:33:01.019045 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:33:01.020170 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:33:01.021324 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:33:01.022688 | 
orchestrator | changed: [testbed-node-4] 2025-06-01 03:33:01.022916 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:33:01.023986 | orchestrator | 2025-06-01 03:33:01.024865 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-01 03:33:01.025985 | orchestrator | Sunday 01 June 2025 03:33:01 +0000 (0:00:01.432) 0:00:11.351 *********** 2025-06-01 03:33:03.014364 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 03:33:03.014877 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 03:33:03.015920 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-01 03:33:03.017905 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-01 03:33:03.018992 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-01 03:33:03.020054 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 03:33:03.021004 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-01 03:33:03.022236 | orchestrator | 2025-06-01 03:33:03.024049 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-01 03:33:03.024073 | orchestrator | Sunday 01 June 2025 03:33:03 +0000 (0:00:01.997) 0:00:13.349 *********** 2025-06-01 03:33:03.419212 | orchestrator | ok: [testbed-manager] 2025-06-01 03:33:03.504803 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:33:04.074589 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:33:04.074756 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:33:04.075535 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:33:04.076815 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:33:04.077714 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:33:04.078853 | orchestrator | 2025-06-01 03:33:04.080143 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-01 03:33:04.080716 | orchestrator | Sunday 01 June 2025 03:33:04 +0000 (0:00:01.057) 0:00:14.406 *********** 2025-06-01 03:33:04.234707 
| orchestrator | skipping: [testbed-manager] 2025-06-01 03:33:04.316348 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:33:04.403024 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:33:04.480960 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:33:04.559266 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:33:04.706135 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:33:04.706347 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:33:04.708194 | orchestrator | 2025-06-01 03:33:04.709821 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-01 03:33:04.710592 | orchestrator | Sunday 01 June 2025 03:33:04 +0000 (0:00:00.630) 0:00:15.036 *********** 2025-06-01 03:33:06.839280 | orchestrator | ok: [testbed-manager] 2025-06-01 03:33:06.841897 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:33:06.842601 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:33:06.846955 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:33:06.848236 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:33:06.850209 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:33:06.850329 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:33:06.850605 | orchestrator | 2025-06-01 03:33:06.851805 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-01 03:33:06.852232 | orchestrator | Sunday 01 June 2025 03:33:06 +0000 (0:00:02.132) 0:00:17.169 *********** 2025-06-01 03:33:07.110843 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:33:07.198364 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:33:07.292006 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:33:07.371824 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:33:07.701053 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:33:07.701228 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:33:07.701726 | orchestrator | changed: [testbed-manager] => 
(item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-01 03:33:07.702091 | orchestrator | 2025-06-01 03:33:07.702632 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-06-01 03:33:07.703056 | orchestrator | Sunday 01 June 2025 03:33:07 +0000 (0:00:00.863) 0:00:18.033 *********** 2025-06-01 03:33:09.405782 | orchestrator | ok: [testbed-manager] 2025-06-01 03:33:09.405955 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:33:09.406144 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:33:09.406913 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:33:09.407561 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:33:09.407866 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:33:09.408571 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:33:09.408921 | orchestrator | 2025-06-01 03:33:09.409378 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-01 03:33:09.410102 | orchestrator | Sunday 01 June 2025 03:33:09 +0000 (0:00:01.700) 0:00:19.733 *********** 2025-06-01 03:33:10.628676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 03:33:10.628947 | orchestrator | 2025-06-01 03:33:10.630100 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-01 03:33:10.636514 | orchestrator | Sunday 01 June 2025 03:33:10 +0000 (0:00:01.226) 0:00:20.960 *********** 2025-06-01 03:33:11.578069 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:33:11.580478 | orchestrator | ok: [testbed-manager] 2025-06-01 03:33:11.581037 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:33:11.581789 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:33:11.582892 | 
orchestrator | ok: [testbed-node-3] 2025-06-01 03:33:11.584652 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:33:11.585599 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:33:11.586667 | orchestrator | 2025-06-01 03:33:11.588620 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-01 03:33:11.589202 | orchestrator | Sunday 01 June 2025 03:33:11 +0000 (0:00:00.949) 0:00:21.909 *********** 2025-06-01 03:33:11.905687 | orchestrator | ok: [testbed-manager] 2025-06-01 03:33:11.988900 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:33:12.080676 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:33:12.160548 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:33:12.246164 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:33:12.384610 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:33:12.385797 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:33:12.386822 | orchestrator | 2025-06-01 03:33:12.387821 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-01 03:33:12.388658 | orchestrator | Sunday 01 June 2025 03:33:12 +0000 (0:00:00.810) 0:00:22.719 *********** 2025-06-01 03:33:12.731249 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 03:33:12.811610 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 03:33:12.901609 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 03:33:12.901844 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 03:33:13.531670 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 03:33:13.532779 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 03:33:13.535875 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 03:33:13.535909 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-06-01 03:33:13.536122 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-01 03:33:13.538171 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-06-01 03:33:13.539076 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-01 03:33:13.540727 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-06-01 03:33:13.541607 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-01 03:33:13.542887 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-06-01 03:33:13.543899 | orchestrator |
2025-06-01 03:33:13.544792 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-01 03:33:13.545903 | orchestrator | Sunday 01 June 2025 03:33:13 +0000 (0:00:01.143) 0:00:23.863 ***********
2025-06-01 03:33:13.704285 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:33:13.789581 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:33:13.871564 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:33:13.956545 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:33:14.043334 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:33:14.177834 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:33:14.178301 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:33:14.180039 | orchestrator |
2025-06-01 03:33:14.181465 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-01 03:33:14.182182 | orchestrator | Sunday 01 June 2025 03:33:14 +0000 (0:00:00.649) 0:00:24.513 ***********
2025-06-01 03:33:17.668863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 03:33:17.669569 | orchestrator |
2025-06-01 03:33:17.670934 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-01 03:33:17.671551 | orchestrator | Sunday 01 June 2025 03:33:17 +0000 (0:00:03.485) 0:00:27.999 ***********
2025-06-01 03:33:22.634580 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.635709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.638883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.638929 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.638967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.640111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.641194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.642985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.643490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.643924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:22.644548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.645109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.647057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.647478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:22.647844 | orchestrator |
2025-06-01 03:33:22.648603 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-06-01 03:33:22.649108 | orchestrator | Sunday 01 June 2025 03:33:22 +0000 (0:00:04.965) 0:00:32.964 ***********
2025-06-01 03:33:27.161763 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.161877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.162328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.162741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.163316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.165634 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.166532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.167419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-01 03:33:27.168232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.169053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.169812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.170653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.171259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.171979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-01 03:33:27.172915 | orchestrator |
2025-06-01 03:33:27.173864 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-06-01 03:33:27.174763 | orchestrator | Sunday 01 June 2025 03:33:27 +0000 (0:00:04.531) 0:00:37.496 ***********
2025-06-01 03:33:28.435057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 03:33:28.435808 | orchestrator |
2025-06-01 03:33:28.437190 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-01 03:33:28.438203 | orchestrator | Sunday 01 June 2025 03:33:28 +0000 (0:00:01.269) 0:00:38.766 ***********
2025-06-01 03:33:28.895820 | orchestrator | ok: [testbed-manager]
2025-06-01 03:33:29.170718 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:33:29.645864 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:33:29.646512 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:33:29.647672 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:33:29.648582 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:33:29.649985 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:33:29.650665 | orchestrator |
2025-06-01 03:33:29.651573 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-01 03:33:29.651862 | orchestrator | Sunday 01 June 2025 03:33:29 +0000 (0:00:01.213) 0:00:39.979 ***********
2025-06-01 03:33:29.735417 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:29.735745 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:29.737137 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:29.832528 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:29.833386 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:29.834766 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:29.836480 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:29.837342 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:29.934485 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:33:29.935002 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:29.936998 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:29.939198 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:29.939910 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:30.028389 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:33:30.029560 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:30.031473 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:30.031947 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:30.033145 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:30.123879 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:33:30.125347 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:30.126662 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:30.127611 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:30.128603 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:30.212772 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:33:30.212945 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:30.215191 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:30.217657 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:30.218598 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:31.630117 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:33:31.630284 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:33:31.631905 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-01 03:33:31.635078 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-01 03:33:31.635129 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-01 03:33:31.635185 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-01 03:33:31.635902 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:33:31.637490 | orchestrator |
2025-06-01 03:33:31.638231 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-01 03:33:31.639118 | orchestrator | Sunday 01 June 2025 03:33:31 +0000 (0:00:01.982) 0:00:41.961 ***********
2025-06-01 03:33:31.794619 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:33:31.879181 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:33:31.970322 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:33:32.055218 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:33:32.135316 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:33:32.255532 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:33:32.256462 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:33:32.257272 | orchestrator |
2025-06-01 03:33:32.258134 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-01 03:33:32.258918 | orchestrator | Sunday 01 June 2025 03:33:32 +0000 (0:00:00.628) 0:00:42.590 ***********
2025-06-01 03:33:32.417841 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:33:32.500910 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:33:32.751572 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:33:32.834778 | orchestrator | skipping: [testbed-node-2]
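The two "Create systemd networkd … files" tasks above render one `.netdev` and one `.network` unit per VXLAN from each loop item. As an illustration only (assuming the `30-<name>.netdev`/`30-<name>.network` file layout visible in the cleanup task items, not the role's actual template output), the pair rendered for `vxlan0` on testbed-manager from the logged item `{vni: 42, mtu: 1350, local_ip: 192.168.16.5, addresses: [192.168.112.5/20]}` could plausibly look like:

```ini
; Hypothetical /etc/systemd/network/30-vxlan0.netdev (sketch from logged values)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

```ini
; Hypothetical /etc/systemd/network/30-vxlan0.network: attaches the address
; from the item's 'addresses' list once the netdev exists
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

Since the items carry explicit unicast `dests` rather than a multicast group, the peers are presumably installed as all-zero FDB entries (one `bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst <dest>` per destination), which is the usual pattern for point-to-multipoint unicast VXLAN.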
2025-06-01 03:33:32.922849 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:33:32.963160 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:33:32.963569 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:33:32.963872 | orchestrator |
2025-06-01 03:33:32.964991 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:33:32.965393 | orchestrator | 2025-06-01 03:33:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:33:32.965491 | orchestrator | 2025-06-01 03:33:32 | INFO  | Please wait and do not abort execution.
2025-06-01 03:33:32.966399 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 03:33:32.966817 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 03:33:32.967353 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 03:33:32.967881 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 03:33:32.968386 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 03:33:32.969157 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 03:33:32.969939 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 03:33:32.971583 | orchestrator |
2025-06-01 03:33:32.971991 | orchestrator |
2025-06-01 03:33:32.972888 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:33:32.973925 | orchestrator | Sunday 01 June 2025 03:33:32 +0000 (0:00:00.706) 0:00:43.296 ***********
2025-06-01 03:33:32.974299 | orchestrator | ===============================================================================
2025-06-01 03:33:32.975211 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.97s
2025-06-01 03:33:32.975652 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.53s
2025-06-01 03:33:32.976099 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.49s
2025-06-01 03:33:32.976645 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.23s
2025-06-01 03:33:32.977287 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s
2025-06-01 03:33:32.977628 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.00s
2025-06-01 03:33:32.977903 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.98s
2025-06-01 03:33:32.978386 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s
2025-06-01 03:33:32.978947 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.70s
2025-06-01 03:33:32.979255 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.67s
2025-06-01 03:33:32.979782 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.43s
2025-06-01 03:33:32.980034 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s
2025-06-01 03:33:32.980639 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s
2025-06-01 03:33:32.980945 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s
2025-06-01 03:33:32.981384 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.14s
2025-06-01 03:33:32.982160 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.14s
2025-06-01 03:33:32.983014 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.06s
2025-06-01 03:33:32.984189 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s
2025-06-01 03:33:32.985016 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s
2025-06-01 03:33:32.986108 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.86s
2025-06-01 03:33:33.572364 | orchestrator | + osism apply wireguard
2025-06-01 03:33:35.199223 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:33:35.199322 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:33:35.199338 | orchestrator | Registering Redlock._release_script
2025-06-01 03:33:35.257869 | orchestrator | 2025-06-01 03:33:35 | INFO  | Task 57d881d9-3da8-445d-afc7-d86293394238 (wireguard) was prepared for execution.
2025-06-01 03:33:35.257951 | orchestrator | 2025-06-01 03:33:35 | INFO  | It takes a moment until task 57d881d9-3da8-445d-afc7-d86293394238 (wireguard) has been started and output is visible here.
2025-06-01 03:33:39.313658 | orchestrator |
2025-06-01 03:33:39.318206 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-01 03:33:39.320512 | orchestrator |
2025-06-01 03:33:39.320673 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-01 03:33:39.322304 | orchestrator | Sunday 01 June 2025 03:33:39 +0000 (0:00:00.235) 0:00:00.235 ***********
2025-06-01 03:33:40.966269 | orchestrator | ok: [testbed-manager]
2025-06-01 03:33:40.966394 | orchestrator |
2025-06-01 03:33:40.966485 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-01 03:33:40.967041 | orchestrator | Sunday 01 June 2025 03:33:40 +0000 (0:00:01.658) 0:00:01.894 ***********
2025-06-01 03:33:48.708225 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:48.708372 | orchestrator |
2025-06-01 03:33:48.708933 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-01 03:33:48.709649 | orchestrator | Sunday 01 June 2025 03:33:48 +0000 (0:00:07.741) 0:00:09.635 ***********
2025-06-01 03:33:49.360581 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:49.360684 | orchestrator |
2025-06-01 03:33:49.361766 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-01 03:33:49.363723 | orchestrator | Sunday 01 June 2025 03:33:49 +0000 (0:00:00.447) 0:00:10.290 ***********
2025-06-01 03:33:49.809173 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:49.809837 | orchestrator |
2025-06-01 03:33:49.810592 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-01 03:33:49.810847 | orchestrator | Sunday 01 June 2025 03:33:49 +0000 (0:00:00.571) 0:00:10.737 ***********
2025-06-01 03:33:50.380741 | orchestrator | ok: [testbed-manager]
2025-06-01 03:33:50.382250 | orchestrator |
2025-06-01 03:33:50.382488 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-01 03:33:50.382584 | orchestrator | Sunday 01 June 2025 03:33:50 +0000 (0:00:00.571) 0:00:11.309 ***********
2025-06-01 03:33:50.977334 | orchestrator | ok: [testbed-manager]
2025-06-01 03:33:50.977463 | orchestrator |
2025-06-01 03:33:50.978835 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-01 03:33:50.979504 | orchestrator | Sunday 01 June 2025 03:33:50 +0000 (0:00:00.596) 0:00:11.906 ***********
2025-06-01 03:33:51.428680 | orchestrator | ok: [testbed-manager]
2025-06-01 03:33:51.428776 | orchestrator |
2025-06-01 03:33:51.428791 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-01 03:33:51.428857 | orchestrator | Sunday 01 June 2025 03:33:51 +0000 (0:00:00.449) 0:00:12.355 ***********
2025-06-01 03:33:52.690080 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:52.691878 | orchestrator |
2025-06-01 03:33:52.692925 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-01 03:33:52.695572 | orchestrator | Sunday 01 June 2025 03:33:52 +0000 (0:00:01.265) 0:00:13.620 ***********
2025-06-01 03:33:53.662589 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-01 03:33:53.663091 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:53.663842 | orchestrator |
2025-06-01 03:33:53.665374 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-01 03:33:53.665454 | orchestrator | Sunday 01 June 2025 03:33:53 +0000 (0:00:00.972) 0:00:14.593 ***********
2025-06-01 03:33:55.331360 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:55.332200 | orchestrator |
2025-06-01 03:33:55.332693 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-01 03:33:55.335886 | orchestrator | Sunday 01 June 2025 03:33:55 +0000 (0:00:01.667) 0:00:16.260 ***********
2025-06-01 03:33:56.197242 | orchestrator | changed: [testbed-manager]
2025-06-01 03:33:56.197352 | orchestrator |
2025-06-01 03:33:56.197371 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:33:56.197528 | orchestrator | 2025-06-01 03:33:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:33:56.198004 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:33:56.198107 | orchestrator | 2025-06-01 03:33:56 | INFO  | Please wait and do not abort execution.
2025-06-01 03:33:56.198482 | orchestrator |
2025-06-01 03:33:56.198852 | orchestrator |
2025-06-01 03:33:56.199318 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:33:56.199703 | orchestrator | Sunday 01 June 2025 03:33:56 +0000 (0:00:00.864) 0:00:17.125 ***********
2025-06-01 03:33:56.200028 | orchestrator | ===============================================================================
2025-06-01 03:33:56.200396 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.74s
2025-06-01 03:33:56.200756 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s
2025-06-01 03:33:56.201073 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s
2025-06-01 03:33:56.201892 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s
2025-06-01 03:33:56.202073 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s
2025-06-01 03:33:56.203149 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.86s
2025-06-01 03:33:56.203198 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.65s
2025-06-01 03:33:56.203323 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.60s
2025-06-01 03:33:56.203945 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.57s
2025-06-01 03:33:56.204290 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2025-06-01 03:33:56.204699 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2025-06-01 03:33:56.782881 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-01 03:33:56.811277 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-01 03:33:56.811366 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-01 03:33:56.895693 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 178 0 --:--:-- --:--:-- --:--:-- 180
2025-06-01 03:33:56.909456 | orchestrator | + osism apply --environment custom workarounds
2025-06-01 03:33:58.611309 | orchestrator | 2025-06-01 03:33:58 | INFO  | Trying to run play workarounds in environment custom
2025-06-01 03:33:58.616217 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:33:58.616263 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:33:58.616276 | orchestrator | Registering Redlock._release_script
2025-06-01 03:33:58.674869 | orchestrator | 2025-06-01 03:33:58 | INFO  | Task 6e461973-ba4c-4019-ae0d-25d6d370c714 (workarounds) was prepared for execution.
2025-06-01 03:33:58.674954 | orchestrator | 2025-06-01 03:33:58 | INFO  | It takes a moment until task 6e461973-ba4c-4019-ae0d-25d6d370c714 (workarounds) has been started and output is visible here.
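The wireguard play above generates server keys and a preshared key, writes `wg0.conf`, and starts `wg-quick@wg0`. For orientation, a `wg0.conf` in the standard wg-quick layout has roughly the following shape; this is a hedged sketch with placeholder keys and an assumed tunnel network, none of the values below are taken from the job output:

```ini
; Illustrative /etc/wireguard/wg0.conf (wg-quick format); all values are placeholders
[Interface]
Address = 192.168.48.1/24              ; assumed VPN network
ListenPort = 51820
PrivateKey = <server private key>      ; cf. "Get private key - server"
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT

[Peer]
PublicKey = <client public key>
PresharedKey = <preshared key>         ; cf. "Get preshared key"
AllowedIPs = 192.168.48.2/32
```

Installing the iptables package at the start of the play is consistent with `PostUp`/`PostDown` firewall rules like the ones sketched here; wg-quick substitutes `%i` with the interface name (`wg0`) when the service starts.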
2025-06-01 03:34:02.524866 | orchestrator |
2025-06-01 03:34:02.527590 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 03:34:02.528771 | orchestrator |
2025-06-01 03:34:02.529225 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-01 03:34:02.530101 | orchestrator | Sunday 01 June 2025 03:34:02 +0000 (0:00:00.110) 0:00:00.110 ***********
2025-06-01 03:34:02.658888 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-01 03:34:02.724038 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-01 03:34:02.786677 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-01 03:34:02.856662 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-01 03:34:02.990247 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-01 03:34:03.118502 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-01 03:34:03.118646 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-01 03:34:03.119682 | orchestrator |
2025-06-01 03:34:03.120533 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-01 03:34:03.121126 | orchestrator |
2025-06-01 03:34:03.121910 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-01 03:34:03.122872 | orchestrator | Sunday 01 June 2025 03:34:03 +0000 (0:00:00.594) 0:00:00.705 ***********
2025-06-01 03:34:05.263226 | orchestrator | ok: [testbed-manager]
2025-06-01 03:34:05.263652 | orchestrator |
2025-06-01 03:34:05.263743 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-01 03:34:05.264716 | orchestrator |
2025-06-01 03:34:05.265976 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-01 03:34:05.266778 | orchestrator | Sunday 01 June 2025 03:34:05 +0000 (0:00:02.141) 0:00:02.847 ***********
2025-06-01 03:34:07.065643 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:34:07.065749 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:34:07.066564 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:34:07.066626 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:34:07.067678 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:34:07.068552 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:34:07.069648 | orchestrator |
2025-06-01 03:34:07.070613 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-01 03:34:07.071096 | orchestrator |
2025-06-01 03:34:07.071860 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-01 03:34:07.072586 | orchestrator | Sunday 01 June 2025 03:34:07 +0000 (0:00:01.797) 0:00:04.645 ***********
2025-06-01 03:34:08.530315 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-01 03:34:08.530464 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-01 03:34:08.531788 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-01 03:34:08.531810 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-01 03:34:08.532565 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-01 03:34:08.533616 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-01 03:34:08.536632 | orchestrator |
2025-06-01 03:34:08.538519 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-01 03:34:08.538694 | orchestrator | Sunday 01 June 2025 03:34:08 +0000 (0:00:01.464) 0:00:06.109 ***********
2025-06-01 03:34:12.290405 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:34:12.290575 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:34:12.291504 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:34:12.292851 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:34:12.294856 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:34:12.295872 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:34:12.296723 | orchestrator |
2025-06-01 03:34:12.297829 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-01 03:34:12.298657 | orchestrator | Sunday 01 June 2025 03:34:12 +0000 (0:00:03.765) 0:00:09.874 ***********
2025-06-01 03:34:12.444958 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:34:12.526098 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:34:12.604525 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:34:12.681072 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:34:12.976675 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:34:12.977059 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:34:12.981374 | orchestrator |
2025-06-01 03:34:12.982268 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-01 03:34:12.983189 | orchestrator |
2025-06-01 03:34:12.984185 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-01 03:34:12.986750 | orchestrator | Sunday 01 June 2025 03:34:12 +0000 (0:00:00.685) 0:00:10.560 ***********
2025-06-01 03:34:14.696196 | orchestrator | changed: [testbed-manager]
2025-06-01 03:34:14.696583 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:34:14.700723 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:34:14.700765 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:34:14.700777 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:34:14.700788 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:34:14.700799 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:34:14.702504 | orchestrator |
2025-06-01 03:34:14.704010 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-01 03:34:14.704692 | orchestrator | Sunday 01 June 2025 03:34:14 +0000 (0:00:01.721) 0:00:12.281 ***********
2025-06-01 03:34:16.300525 | orchestrator | changed: [testbed-manager]
2025-06-01 03:34:16.302104 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:34:16.303320 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:34:16.304470 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:34:16.305891 | orchestrator | changed: [testbed-node-0]
2025-06-01 03:34:16.306311 | orchestrator | changed: [testbed-node-1]
2025-06-01 03:34:16.306883 | orchestrator | changed: [testbed-node-2]
2025-06-01 03:34:16.307295 | orchestrator |
2025-06-01 03:34:16.307660 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-01 03:34:16.308395 | orchestrator | Sunday 01 June 2025 03:34:16 +0000 (0:00:01.599) 0:00:13.881 ***********
2025-06-01 03:34:17.729917 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:34:17.732151 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:34:17.733006 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:34:17.734456 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:34:17.736798 | orchestrator | ok: [testbed-manager]
2025-06-01 03:34:17.737627 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:34:17.738830 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:34:17.740498 | orchestrator |
2025-06-01 03:34:17.741769 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-01 03:34:17.742388 | orchestrator
| Sunday 01 June 2025 03:34:17 +0000 (0:00:01.432) 0:00:15.314 *********** 2025-06-01 03:34:19.449462 | orchestrator | changed: [testbed-manager] 2025-06-01 03:34:19.450965 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:34:19.452640 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:34:19.454242 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:34:19.455510 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:34:19.456736 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:34:19.457478 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:34:19.458317 | orchestrator | 2025-06-01 03:34:19.459092 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-01 03:34:19.459597 | orchestrator | Sunday 01 June 2025 03:34:19 +0000 (0:00:01.715) 0:00:17.030 *********** 2025-06-01 03:34:19.616064 | orchestrator | skipping: [testbed-manager] 2025-06-01 03:34:19.704055 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:34:19.782447 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:34:19.862881 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:34:19.936785 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:34:20.089487 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:34:20.091259 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:34:20.093201 | orchestrator | 2025-06-01 03:34:20.094873 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-01 03:34:20.095686 | orchestrator | 2025-06-01 03:34:20.097069 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-01 03:34:20.098179 | orchestrator | Sunday 01 June 2025 03:34:20 +0000 (0:00:00.641) 0:00:17.671 *********** 2025-06-01 03:34:23.455980 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:34:23.456772 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:34:23.457131 | orchestrator | ok: 
[testbed-manager] 2025-06-01 03:34:23.458700 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:34:23.460178 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:34:23.461749 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:34:23.464509 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:34:23.465379 | orchestrator | 2025-06-01 03:34:23.466127 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:34:23.466574 | orchestrator | 2025-06-01 03:34:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 03:34:23.466970 | orchestrator | 2025-06-01 03:34:23 | INFO  | Please wait and do not abort execution. 2025-06-01 03:34:23.468286 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:34:23.468774 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:23.469741 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:23.469965 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:23.470477 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:23.470981 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:23.471400 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:23.471996 | orchestrator | 2025-06-01 03:34:23.472314 | orchestrator | 2025-06-01 03:34:23.472674 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 03:34:23.473202 | orchestrator | Sunday 01 June 2025 03:34:23 +0000 (0:00:03.367) 0:00:21.039 *********** 2025-06-01 
03:34:23.473513 | orchestrator | =============================================================================== 2025-06-01 03:34:23.473944 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s 2025-06-01 03:34:23.474430 | orchestrator | Install python3-docker -------------------------------------------------- 3.37s 2025-06-01 03:34:23.474717 | orchestrator | Apply netplan configuration --------------------------------------------- 2.14s 2025-06-01 03:34:23.475113 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-06-01 03:34:23.475541 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-06-01 03:34:23.475874 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-06-01 03:34:23.476479 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s 2025-06-01 03:34:23.476812 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2025-06-01 03:34:23.477183 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.43s 2025-06-01 03:34:23.477580 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-06-01 03:34:23.477981 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2025-06-01 03:34:23.478319 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.59s 2025-06-01 03:34:24.060674 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-01 03:34:25.696379 | orchestrator | Registering Redlock._acquired_script 2025-06-01 03:34:25.696542 | orchestrator | Registering Redlock._extend_script 2025-06-01 03:34:25.696558 | orchestrator | Registering Redlock._release_script 2025-06-01 03:34:25.754598 | orchestrator | 2025-06-01 
03:34:25 | INFO  | Task ec8b6de0-dab0-4a6c-a0b4-f88251c5dc57 (reboot) was prepared for execution. 2025-06-01 03:34:25.754703 | orchestrator | 2025-06-01 03:34:25 | INFO  | It takes a moment until task ec8b6de0-dab0-4a6c-a0b4-f88251c5dc57 (reboot) has been started and output is visible here. 2025-06-01 03:34:29.749811 | orchestrator | 2025-06-01 03:34:29.749928 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 03:34:29.750471 | orchestrator | 2025-06-01 03:34:29.751260 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 03:34:29.751986 | orchestrator | Sunday 01 June 2025 03:34:29 +0000 (0:00:00.208) 0:00:00.208 *********** 2025-06-01 03:34:29.844944 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:34:29.845036 | orchestrator | 2025-06-01 03:34:29.845051 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 03:34:29.845180 | orchestrator | Sunday 01 June 2025 03:34:29 +0000 (0:00:00.097) 0:00:00.306 *********** 2025-06-01 03:34:30.741639 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:34:30.743024 | orchestrator | 2025-06-01 03:34:30.744053 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 03:34:30.744800 | orchestrator | Sunday 01 June 2025 03:34:30 +0000 (0:00:00.898) 0:00:01.205 *********** 2025-06-01 03:34:30.858733 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:34:30.859725 | orchestrator | 2025-06-01 03:34:30.860491 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 03:34:30.861713 | orchestrator | 2025-06-01 03:34:30.862304 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 03:34:30.863479 | orchestrator | Sunday 01 June 2025 03:34:30 +0000 (0:00:00.118) 0:00:01.324 *********** 2025-06-01 
03:34:30.966502 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:34:30.966713 | orchestrator | 2025-06-01 03:34:30.968419 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 03:34:30.969093 | orchestrator | Sunday 01 June 2025 03:34:30 +0000 (0:00:00.106) 0:00:01.430 *********** 2025-06-01 03:34:31.638255 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:34:31.639567 | orchestrator | 2025-06-01 03:34:31.641927 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 03:34:31.642462 | orchestrator | Sunday 01 June 2025 03:34:31 +0000 (0:00:00.670) 0:00:02.101 *********** 2025-06-01 03:34:31.761707 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:34:31.762844 | orchestrator | 2025-06-01 03:34:31.763159 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 03:34:31.764133 | orchestrator | 2025-06-01 03:34:31.765563 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 03:34:31.766322 | orchestrator | Sunday 01 June 2025 03:34:31 +0000 (0:00:00.122) 0:00:02.223 *********** 2025-06-01 03:34:31.986658 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:34:31.987271 | orchestrator | 2025-06-01 03:34:31.988328 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 03:34:31.989367 | orchestrator | Sunday 01 June 2025 03:34:31 +0000 (0:00:00.227) 0:00:02.451 *********** 2025-06-01 03:34:32.637913 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:34:32.638105 | orchestrator | 2025-06-01 03:34:32.639057 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 03:34:32.639254 | orchestrator | Sunday 01 June 2025 03:34:32 +0000 (0:00:00.648) 0:00:03.099 *********** 2025-06-01 03:34:32.764774 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 03:34:32.764868 | orchestrator | 2025-06-01 03:34:32.764881 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 03:34:32.764894 | orchestrator | 2025-06-01 03:34:32.765674 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 03:34:32.766979 | orchestrator | Sunday 01 June 2025 03:34:32 +0000 (0:00:00.125) 0:00:03.225 *********** 2025-06-01 03:34:32.863327 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:34:32.864010 | orchestrator | 2025-06-01 03:34:32.864863 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 03:34:32.865695 | orchestrator | Sunday 01 June 2025 03:34:32 +0000 (0:00:00.102) 0:00:03.328 *********** 2025-06-01 03:34:33.496863 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:34:33.497608 | orchestrator | 2025-06-01 03:34:33.498073 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 03:34:33.498771 | orchestrator | Sunday 01 June 2025 03:34:33 +0000 (0:00:00.632) 0:00:03.961 *********** 2025-06-01 03:34:33.612871 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:34:33.615805 | orchestrator | 2025-06-01 03:34:33.615833 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 03:34:33.615845 | orchestrator | 2025-06-01 03:34:33.615857 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 03:34:33.616198 | orchestrator | Sunday 01 June 2025 03:34:33 +0000 (0:00:00.113) 0:00:04.075 *********** 2025-06-01 03:34:33.727776 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:34:33.728700 | orchestrator | 2025-06-01 03:34:33.729771 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 03:34:33.732000 | orchestrator | 
Sunday 01 June 2025 03:34:33 +0000 (0:00:00.117) 0:00:04.192 *********** 2025-06-01 03:34:34.363699 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:34:34.364974 | orchestrator | 2025-06-01 03:34:34.366152 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 03:34:34.367046 | orchestrator | Sunday 01 June 2025 03:34:34 +0000 (0:00:00.633) 0:00:04.825 *********** 2025-06-01 03:34:34.472130 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:34:34.472460 | orchestrator | 2025-06-01 03:34:34.473276 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 03:34:34.474215 | orchestrator | 2025-06-01 03:34:34.476375 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 03:34:34.476430 | orchestrator | Sunday 01 June 2025 03:34:34 +0000 (0:00:00.108) 0:00:04.934 *********** 2025-06-01 03:34:34.575049 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:34:34.575457 | orchestrator | 2025-06-01 03:34:34.576534 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 03:34:34.577068 | orchestrator | Sunday 01 June 2025 03:34:34 +0000 (0:00:00.105) 0:00:05.039 *********** 2025-06-01 03:34:35.266453 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:34:35.266555 | orchestrator | 2025-06-01 03:34:35.266737 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 03:34:35.267058 | orchestrator | Sunday 01 June 2025 03:34:35 +0000 (0:00:00.690) 0:00:05.730 *********** 2025-06-01 03:34:35.298260 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:34:35.298885 | orchestrator | 2025-06-01 03:34:35.299007 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:34:35.299561 | orchestrator | 2025-06-01 03:34:35 | INFO  | Play has been 
completed. There may now be a delay until all logs have been written. 2025-06-01 03:34:35.300070 | orchestrator | 2025-06-01 03:34:35 | INFO  | Please wait and do not abort execution. 2025-06-01 03:34:35.300863 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:35.302554 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:35.302589 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:35.302657 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:35.303145 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:35.303611 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 03:34:35.303910 | orchestrator | 2025-06-01 03:34:35.304419 | orchestrator | 2025-06-01 03:34:35.305033 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 03:34:35.305273 | orchestrator | Sunday 01 June 2025 03:34:35 +0000 (0:00:00.034) 0:00:05.764 *********** 2025-06-01 03:34:35.305830 | orchestrator | =============================================================================== 2025-06-01 03:34:35.306001 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.17s 2025-06-01 03:34:35.306540 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s 2025-06-01 03:34:35.306868 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2025-06-01 03:34:35.878708 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-01 03:34:37.614860 | orchestrator | Registering Redlock._acquired_script 
2025-06-01 03:34:37.614968 | orchestrator | Registering Redlock._extend_script 2025-06-01 03:34:37.614985 | orchestrator | Registering Redlock._release_script 2025-06-01 03:34:37.676056 | orchestrator | 2025-06-01 03:34:37 | INFO  | Task 1086182b-8ebd-46d5-8b8b-cc7f00eaa070 (wait-for-connection) was prepared for execution. 2025-06-01 03:34:37.676159 | orchestrator | 2025-06-01 03:34:37 | INFO  | It takes a moment until task 1086182b-8ebd-46d5-8b8b-cc7f00eaa070 (wait-for-connection) has been started and output is visible here. 2025-06-01 03:34:41.892334 | orchestrator | 2025-06-01 03:34:41.892561 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-01 03:34:41.898344 | orchestrator | 2025-06-01 03:34:41.898906 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-01 03:34:41.899681 | orchestrator | Sunday 01 June 2025 03:34:41 +0000 (0:00:00.231) 0:00:00.231 *********** 2025-06-01 03:34:53.706090 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:34:53.706222 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:34:53.706239 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:34:53.706250 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:34:53.706279 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:34:53.706291 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:34:53.706302 | orchestrator | 2025-06-01 03:34:53.706475 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:34:53.706530 | orchestrator | 2025-06-01 03:34:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 03:34:53.706555 | orchestrator | 2025-06-01 03:34:53 | INFO  | Please wait and do not abort execution. 
2025-06-01 03:34:53.707317 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:34:53.707652 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:34:53.711280 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:34:53.711349 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:34:53.711365 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:34:53.711376 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:34:53.711388 | orchestrator | 2025-06-01 03:34:53.711495 | orchestrator | 2025-06-01 03:34:53.711989 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 03:34:53.712465 | orchestrator | Sunday 01 June 2025 03:34:53 +0000 (0:00:11.810) 0:00:12.041 *********** 2025-06-01 03:34:53.714514 | orchestrator | =============================================================================== 2025-06-01 03:34:53.714935 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.81s 2025-06-01 03:34:54.297176 | orchestrator | + osism apply hddtemp 2025-06-01 03:34:56.117279 | orchestrator | Registering Redlock._acquired_script 2025-06-01 03:34:56.117468 | orchestrator | Registering Redlock._extend_script 2025-06-01 03:34:56.117502 | orchestrator | Registering Redlock._release_script 2025-06-01 03:34:56.178898 | orchestrator | 2025-06-01 03:34:56 | INFO  | Task def5daf6-97d7-4138-8606-c96dae743873 (hddtemp) was prepared for execution. 
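The `osism apply reboot` / `osism apply wait-for-connection` pair above is a fire-and-forget reboot pattern: each node is told to reboot without blocking, and a second pass then polls until every node is reachable again. A minimal stand-alone sketch of that pattern (hostnames, SSH options, and the poll interval are illustrative assumptions, not what the osism playbooks actually run):

```shell
#!/usr/bin/env bash
# Sketch of the reboot-then-wait pattern seen in the log above:
# trigger reboots without waiting, then poll until SSH answers again.
# Node names and SSH options here are assumptions for illustration.
reboot_and_wait() {
    local nodes=("$@")
    local node
    for node in "${nodes[@]}"; do
        # "Reboot system - do not wait for the reboot to complete"
        ssh "$node" sudo systemctl reboot || true
    done
    for node in "${nodes[@]}"; do
        # "Wait until remote system is reachable"
        until ssh -o ConnectTimeout=5 -o BatchMode=yes "$node" true 2>/dev/null; do
            sleep 5
        done
    done
}
```

In the job itself the two halves are separate playbooks, and the reachability check is done by Ansible (the single `Wait until remote system is reachable` task taking ~11.8s for all six nodes) rather than a raw SSH loop.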
2025-06-01 03:34:56.178989 | orchestrator | 2025-06-01 03:34:56 | INFO  | It takes a moment until task def5daf6-97d7-4138-8606-c96dae743873 (hddtemp) has been started and output is visible here. 2025-06-01 03:35:00.238137 | orchestrator | 2025-06-01 03:35:00.243990 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-01 03:35:00.244702 | orchestrator | 2025-06-01 03:35:00.245430 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-01 03:35:00.246406 | orchestrator | Sunday 01 June 2025 03:35:00 +0000 (0:00:00.259) 0:00:00.259 *********** 2025-06-01 03:35:00.389958 | orchestrator | ok: [testbed-manager] 2025-06-01 03:35:00.468283 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:35:00.544729 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:35:00.622567 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:35:00.819177 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:35:00.947378 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:35:00.947901 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:35:00.948085 | orchestrator | 2025-06-01 03:35:00.953997 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-01 03:35:00.954104 | orchestrator | Sunday 01 June 2025 03:35:00 +0000 (0:00:00.708) 0:00:00.968 *********** 2025-06-01 03:35:02.131831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 03:35:02.132008 | orchestrator | 2025-06-01 03:35:02.132501 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-01 03:35:02.133761 | orchestrator | Sunday 01 June 2025 03:35:02 +0000 (0:00:01.184) 0:00:02.152 *********** 2025-06-01 03:35:04.177352 | 
orchestrator | ok: [testbed-manager] 2025-06-01 03:35:04.179600 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:35:04.179661 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:35:04.181144 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:35:04.181953 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:35:04.182720 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:35:04.183960 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:35:04.184816 | orchestrator | 2025-06-01 03:35:04.185509 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-01 03:35:04.186641 | orchestrator | Sunday 01 June 2025 03:35:04 +0000 (0:00:02.046) 0:00:04.198 *********** 2025-06-01 03:35:04.787962 | orchestrator | changed: [testbed-manager] 2025-06-01 03:35:04.878189 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:35:05.349271 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:35:05.349375 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:35:05.350130 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:35:05.350612 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:35:05.350867 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:35:05.351591 | orchestrator | 2025-06-01 03:35:05.352079 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-01 03:35:05.352467 | orchestrator | Sunday 01 June 2025 03:35:05 +0000 (0:00:01.170) 0:00:05.369 *********** 2025-06-01 03:35:07.131285 | orchestrator | ok: [testbed-node-0] 2025-06-01 03:35:07.132211 | orchestrator | ok: [testbed-node-2] 2025-06-01 03:35:07.132826 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:35:07.134003 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:35:07.134492 | orchestrator | ok: [testbed-manager] 2025-06-01 03:35:07.135441 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:35:07.136257 | orchestrator | ok: [testbed-node-1] 2025-06-01 03:35:07.137065 | orchestrator | 
2025-06-01 03:35:07.137885 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-01 03:35:07.138693 | orchestrator | Sunday 01 June 2025 03:35:07 +0000 (0:00:01.785) 0:00:07.154 *********** 2025-06-01 03:35:07.593565 | orchestrator | skipping: [testbed-node-0] 2025-06-01 03:35:07.670898 | orchestrator | skipping: [testbed-node-1] 2025-06-01 03:35:07.747434 | orchestrator | skipping: [testbed-node-2] 2025-06-01 03:35:07.830752 | orchestrator | changed: [testbed-manager] 2025-06-01 03:35:07.960770 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:35:07.961336 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:35:07.962118 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:35:07.964987 | orchestrator | 2025-06-01 03:35:07.965036 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-01 03:35:07.965050 | orchestrator | Sunday 01 June 2025 03:35:07 +0000 (0:00:00.827) 0:00:07.981 *********** 2025-06-01 03:35:19.907955 | orchestrator | changed: [testbed-manager] 2025-06-01 03:35:19.908074 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:35:19.908091 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:35:19.908103 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:35:19.908114 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:35:19.908289 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:35:19.908308 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:35:19.908321 | orchestrator | 2025-06-01 03:35:19.908333 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-01 03:35:19.912919 | orchestrator | Sunday 01 June 2025 03:35:19 +0000 (0:00:11.936) 0:00:19.918 *********** 2025-06-01 03:35:21.300771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 03:35:21.301069 | orchestrator | 2025-06-01 03:35:21.301825 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-01 03:35:21.305428 | orchestrator | Sunday 01 June 2025 03:35:21 +0000 (0:00:01.403) 0:00:21.321 *********** 2025-06-01 03:35:23.230116 | orchestrator | changed: [testbed-manager] 2025-06-01 03:35:23.230500 | orchestrator | changed: [testbed-node-0] 2025-06-01 03:35:23.231096 | orchestrator | changed: [testbed-node-1] 2025-06-01 03:35:23.231164 | orchestrator | changed: [testbed-node-3] 2025-06-01 03:35:23.231538 | orchestrator | changed: [testbed-node-2] 2025-06-01 03:35:23.232557 | orchestrator | changed: [testbed-node-4] 2025-06-01 03:35:23.234231 | orchestrator | changed: [testbed-node-5] 2025-06-01 03:35:23.234544 | orchestrator | 2025-06-01 03:35:23.235239 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:35:23.235549 | orchestrator | 2025-06-01 03:35:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 03:35:23.235621 | orchestrator | 2025-06-01 03:35:23 | INFO  | Please wait and do not abort execution. 
2025-06-01 03:35:23.236626 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 03:35:23.237479 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:35:23.237883 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:35:23.238530 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:35:23.239302 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:35:23.239614 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:35:23.240108 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 03:35:23.240582 | orchestrator | 2025-06-01 03:35:23.241265 | orchestrator | 2025-06-01 03:35:23.241710 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 03:35:23.242157 | orchestrator | Sunday 01 June 2025 03:35:23 +0000 (0:00:01.932) 0:00:23.253 *********** 2025-06-01 03:35:23.242856 | orchestrator | =============================================================================== 2025-06-01 03:35:23.243787 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.94s 2025-06-01 03:35:23.243809 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.05s 2025-06-01 03:35:23.243985 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2025-06-01 03:35:23.244352 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.79s 2025-06-01 03:35:23.244896 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s 
2025-06-01 03:35:23.245233 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s
2025-06-01 03:35:23.246113 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.17s
2025-06-01 03:35:23.246713 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.83s
2025-06-01 03:35:23.247046 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-06-01 03:35:23.909456 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-06-01 03:35:25.352039 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-01 03:35:25.352152 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-01 03:35:25.352169 | orchestrator | + local max_attempts=60
2025-06-01 03:35:25.352182 | orchestrator | + local name=ceph-ansible
2025-06-01 03:35:25.352195 | orchestrator | + local attempt_num=1
2025-06-01 03:35:25.352989 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-01 03:35:25.390951 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 03:35:25.391024 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-01 03:35:25.391039 | orchestrator | + local max_attempts=60
2025-06-01 03:35:25.391053 | orchestrator | + local name=kolla-ansible
2025-06-01 03:35:25.391065 | orchestrator | + local attempt_num=1
2025-06-01 03:35:25.391764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-01 03:35:25.439913 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 03:35:25.439984 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-01 03:35:25.439999 | orchestrator | + local max_attempts=60
2025-06-01 03:35:25.440011 | orchestrator | + local name=osism-ansible
2025-06-01 03:35:25.440022 | orchestrator | + local attempt_num=1
2025-06-01 03:35:25.441535 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-01 03:35:25.486927 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 03:35:25.487009 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-01 03:35:25.487024 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-01 03:35:25.653304 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-01 03:35:25.814817 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-01 03:35:25.996421 | orchestrator | ARA in osism-ansible already disabled.
2025-06-01 03:35:26.150963 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-01 03:35:26.151662 | orchestrator | + osism apply gather-facts
2025-06-01 03:35:27.869487 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:35:27.869586 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:35:27.869601 | orchestrator | Registering Redlock._release_script
2025-06-01 03:35:27.929846 | orchestrator | 2025-06-01 03:35:27 | INFO  | Task 983dc963-13a1-4fe5-80e1-9fb65246b804 (gather-facts) was prepared for execution.
2025-06-01 03:35:27.929928 | orchestrator | 2025-06-01 03:35:27 | INFO  | It takes a moment until task 983dc963-13a1-4fe5-80e1-9fb65246b804 (gather-facts) has been started and output is visible here.
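The shell trace above shows `wait_for_container_healthy` returning immediately because each container already reports `healthy`. A minimal sketch of such a polling helper, reconstructed from the traced locals (`max_attempts`, `name`, `attempt_num`) and not taken verbatim from the testbed scripts, could look like this:

```shell
# Sketch of a container health-wait loop (assumption: reconstructed from the
# trace, not the actual /opt/configuration script). Polls `docker inspect`
# until the container's health status is "healthy" or the attempt budget runs out.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # The trace calls /usr/bin/docker directly; plain `docker` is used here.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

Because the status check runs before the first sleep, an already-healthy container (as in this run) costs only a single `docker inspect` call.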
2025-06-01 03:35:32.026227 | orchestrator |
2025-06-01 03:35:32.029021 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 03:35:32.029915 | orchestrator |
2025-06-01 03:35:32.032145 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 03:35:32.033794 | orchestrator | Sunday 01 June 2025 03:35:32 +0000 (0:00:00.216) 0:00:00.216 ***********
2025-06-01 03:35:37.258738 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:35:37.259493 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:35:37.264242 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:35:37.264667 | orchestrator | ok: [testbed-manager]
2025-06-01 03:35:37.265803 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:35:37.266814 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:35:37.267263 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:35:37.268098 | orchestrator |
2025-06-01 03:35:37.268598 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-01 03:35:37.269139 | orchestrator |
2025-06-01 03:35:37.269780 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-01 03:35:37.270210 | orchestrator | Sunday 01 June 2025 03:35:37 +0000 (0:00:05.236) 0:00:05.452 ***********
2025-06-01 03:35:37.434925 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:35:37.513484 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:35:37.587647 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:35:37.665268 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:35:37.742008 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:35:37.789102 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:35:37.789263 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:35:37.790445 | orchestrator |
2025-06-01 03:35:37.791750 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:35:37.792752 | orchestrator | 2025-06-01 03:35:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:35:37.792986 | orchestrator | 2025-06-01 03:35:37 | INFO  | Please wait and do not abort execution.
2025-06-01 03:35:37.794709 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.795559 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.796684 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.797297 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.798172 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.798814 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.799323 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 03:35:37.799813 | orchestrator |
2025-06-01 03:35:37.800175 | orchestrator |
2025-06-01 03:35:37.800888 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:35:37.801284 | orchestrator | Sunday 01 June 2025 03:35:37 +0000 (0:00:00.531) 0:00:05.983 ***********
2025-06-01 03:35:37.802175 | orchestrator | ===============================================================================
2025-06-01 03:35:37.802361 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.24s
2025-06-01 03:35:37.802851 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-06-01 03:35:38.440443 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-01 03:35:38.459710 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-01 03:35:38.472787 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-01 03:35:38.487908 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-01 03:35:38.498533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-01 03:35:38.514678 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-01 03:35:38.526580 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-01 03:35:38.544110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-01 03:35:38.559252 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-01 03:35:38.578756 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-01 03:35:38.591620 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-01 03:35:38.607278 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-01 03:35:38.620516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-01 03:35:38.632474 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-01 03:35:38.645746 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-01 03:35:38.657706 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-01 03:35:38.672197 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-01 03:35:38.684829 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-01 03:35:38.696470 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-01 03:35:38.714667 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-01 03:35:38.733618 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-01 03:35:39.051737 | orchestrator | ok: Runtime: 0:18:36.850271
2025-06-01 03:35:39.175727 |
2025-06-01 03:35:39.175882 | TASK [Deploy services]
2025-06-01 03:35:39.709154 | orchestrator | skipping: Conditional result was False
2025-06-01 03:35:39.728108 |
2025-06-01 03:35:39.728280 | TASK [Deploy in a nutshell]
2025-06-01 03:35:40.463609 | orchestrator | + set -e
2025-06-01 03:35:40.463743 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-01 03:35:40.463756 | orchestrator | ++ export INTERACTIVE=false
2025-06-01 03:35:40.463767 | orchestrator | ++ INTERACTIVE=false
2025-06-01 03:35:40.463773 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-01 03:35:40.463861 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-01 03:35:40.463871 | orchestrator | + source /opt/manager-vars.sh
2025-06-01 03:35:40.463894 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-01 03:35:40.463908 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-01 03:35:40.463914 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-01 03:35:40.463921 | orchestrator | ++ CEPH_VERSION=reef
2025-06-01 03:35:40.463926 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-01 03:35:40.463934 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-01 03:35:40.463939 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-01 03:35:40.463949 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-01 03:35:40.463954 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-01 03:35:40.463960 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-01 03:35:40.463965 | orchestrator | ++ export ARA=false
2025-06-01 03:35:40.463970 | orchestrator | ++ ARA=false
2025-06-01 03:35:40.463975 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-01 03:35:40.463980 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-01 03:35:40.463985 | orchestrator | ++ export TEMPEST=true
2025-06-01 03:35:40.463989 | orchestrator | ++ TEMPEST=true
2025-06-01 03:35:40.463994 | orchestrator | ++ export IS_ZUUL=true
2025-06-01 03:35:40.463999 | orchestrator | ++ IS_ZUUL=true
2025-06-01 03:35:40.464003 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201
2025-06-01 03:35:40.464008 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201
2025-06-01 03:35:40.464013 | orchestrator | ++ export EXTERNAL_API=false
2025-06-01 03:35:40.464017 | orchestrator | ++ EXTERNAL_API=false
2025-06-01 03:35:40.464022 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-01 03:35:40.464027 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-01 03:35:40.464031 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-01 03:35:40.464044 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-01 03:35:40.464073 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-01 03:35:40.464079 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-01 03:35:40.464134 | orchestrator |
2025-06-01 03:35:40.464141 | orchestrator | # PULL IMAGES
2025-06-01 03:35:40.464146 | orchestrator |
2025-06-01 03:35:40.464158 | orchestrator | + echo
2025-06-01 03:35:40.464164 | orchestrator | + echo '# PULL IMAGES'
2025-06-01 03:35:40.464168 | orchestrator | + echo
2025-06-01 03:35:40.464419 | orchestrator | ++ semver latest 7.0.0
2025-06-01 03:35:40.521941 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-01 03:35:40.522008 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-01 03:35:40.522135 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-01 03:35:42.264635 | orchestrator | 2025-06-01 03:35:42 | INFO  | Trying to run play pull-images in environment custom
2025-06-01 03:35:42.270222 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:35:42.270293 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:35:42.270308 | orchestrator | Registering Redlock._release_script
2025-06-01 03:35:42.331748 | orchestrator | 2025-06-01 03:35:42 | INFO  | Task 06829485-b825-49fe-bd04-0a8c75d3ef27 (pull-images) was prepared for execution.
2025-06-01 03:35:42.331834 | orchestrator | 2025-06-01 03:35:42 | INFO  | It takes a moment until task 06829485-b825-49fe-bd04-0a8c75d3ef27 (pull-images) has been started and output is visible here.
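The trace above shows the pull gate: `semver latest 7.0.0` prints `-1`, the numeric check `[[ -1 -ge 0 ]]` fails, and the `latest` fallback branch triggers the pull. A hedged sketch of that decision, assuming only that `semver` prints a negative, zero, or positive comparison result like the helper sourced from `include.sh`:

```shell
# Sketch (assumption: reconstructed from the trace, not the actual deploy
# script). Images are pre-pulled when the manager release compares >= 7.0.0,
# or unconditionally for the rolling "latest" tag, which semver cannot order.
should_pull_images() {
    local version="$1"
    if [[ "$(semver "$version" 7.0.0)" -ge 0 || "$version" == "latest" ]]; then
        return 0
    fi
    return 1
}
```

With `MANAGER_VERSION=latest`, as in this run, the second condition makes the gate pass even though the semver comparison reports `-1`.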
2025-06-01 03:35:46.150855 | orchestrator |
2025-06-01 03:35:46.151052 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-01 03:35:46.151685 | orchestrator |
2025-06-01 03:35:46.151920 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-01 03:35:46.153237 | orchestrator | Sunday 01 June 2025 03:35:46 +0000 (0:00:00.122) 0:00:00.122 ***********
2025-06-01 03:36:49.243706 | orchestrator | changed: [testbed-manager]
2025-06-01 03:36:49.243939 | orchestrator |
2025-06-01 03:36:49.243962 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-01 03:36:49.243976 | orchestrator | Sunday 01 June 2025 03:36:49 +0000 (0:01:03.087) 0:01:03.210 ***********
2025-06-01 03:37:46.844685 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-01 03:37:46.845028 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-01 03:37:46.845057 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-01 03:37:46.847460 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-01 03:37:46.848412 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-01 03:37:46.849926 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-01 03:37:46.850954 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-01 03:37:46.851295 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-01 03:37:46.853962 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-01 03:37:46.854578 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-01 03:37:46.855371 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-01 03:37:46.855938 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-01 03:37:46.856770 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-01 03:37:46.856932 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-01 03:37:46.857766 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-01 03:37:46.860962 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-01 03:37:46.861035 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-01 03:37:46.861051 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-01 03:37:46.861062 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-01 03:37:46.862687 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-01 03:37:46.862737 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-01 03:37:46.863287 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-01 03:37:46.863489 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-01 03:37:46.863872 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-01 03:37:46.864426 | orchestrator |
2025-06-01 03:37:46.865095 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:37:46.865811 | orchestrator | 2025-06-01 03:37:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:37:46.866541 | orchestrator | 2025-06-01 03:37:46 | INFO  | Please wait and do not abort execution.
2025-06-01 03:37:46.867236 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 03:37:46.867642 | orchestrator |
2025-06-01 03:37:46.868020 | orchestrator |
2025-06-01 03:37:46.868450 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:37:46.868832 | orchestrator | Sunday 01 June 2025 03:37:46 +0000 (0:00:57.600) 0:02:00.810 ***********
2025-06-01 03:37:46.869272 | orchestrator | ===============================================================================
2025-06-01 03:37:46.869718 | orchestrator | Pull keystone image ---------------------------------------------------- 63.09s
2025-06-01 03:37:46.870583 | orchestrator | Pull other images ------------------------------------------------------ 57.60s
2025-06-01 03:37:49.379780 | orchestrator | 2025-06-01 03:37:49 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-01 03:37:49.385217 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:37:49.385268 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:37:49.385396 | orchestrator | Registering Redlock._release_script
2025-06-01 03:37:49.448836 | orchestrator | 2025-06-01 03:37:49 | INFO  | Task d5ae7bcd-1024-4f22-b797-2bf7180fd612 (wipe-partitions) was prepared for execution.
2025-06-01 03:37:49.448927 | orchestrator | 2025-06-01 03:37:49 | INFO  | It takes a moment until task d5ae7bcd-1024-4f22-b797-2bf7180fd612 (wipe-partitions) has been started and output is visible here.
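The pull-images play above warms the manager's image cache service by service ("Pull keystone image", then a loop over the remaining items). As plain shell, the looped part follows a pattern like this hypothetical helper (the registry name and tag here are placeholders, not values from the log):

```shell
# Hypothetical sketch of the "Pull other images" loop: pre-pull one image per
# service so later deploy steps start from a warm local cache.
pull_service_images() {
    local registry="$1"; shift   # e.g. a local pull-through registry
    local svc
    for svc in "$@"; do
        docker pull "${registry}/${svc}:latest"
    done
}
```

Pre-pulling on the manager is why the subsequent deploy plays in this job spend their time configuring services rather than waiting on registry downloads.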
2025-06-01 03:37:53.326773 | orchestrator |
2025-06-01 03:37:53.326856 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-06-01 03:37:53.327084 | orchestrator |
2025-06-01 03:37:53.327648 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-06-01 03:37:53.328494 | orchestrator | Sunday 01 June 2025 03:37:53 +0000 (0:00:00.156) 0:00:00.156 ***********
2025-06-01 03:37:53.901462 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:37:53.901560 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:37:53.901638 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:37:53.901971 | orchestrator |
2025-06-01 03:37:53.902279 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-06-01 03:37:53.902531 | orchestrator | Sunday 01 June 2025 03:37:53 +0000 (0:00:00.573) 0:00:00.729 ***********
2025-06-01 03:37:54.049811 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:37:54.140738 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:37:54.141719 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:37:54.141750 | orchestrator |
2025-06-01 03:37:54.141764 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-06-01 03:37:54.141778 | orchestrator | Sunday 01 June 2025 03:37:54 +0000 (0:00:00.238) 0:00:00.968 ***********
2025-06-01 03:37:54.841913 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:37:54.842067 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:37:54.842155 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:37:54.843177 | orchestrator |
2025-06-01 03:37:54.843501 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-06-01 03:37:54.843808 | orchestrator | Sunday 01 June 2025 03:37:54 +0000 (0:00:00.702) 0:00:01.671 ***********
2025-06-01 03:37:55.023031 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:37:55.115999 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:37:55.118477 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:37:55.118669 | orchestrator |
2025-06-01 03:37:55.119023 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-01 03:37:55.119435 | orchestrator | Sunday 01 June 2025 03:37:55 +0000 (0:00:00.274) 0:00:01.946 ***********
2025-06-01 03:37:56.315225 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-01 03:37:56.316363 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-01 03:37:56.320292 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-01 03:37:56.320590 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-01 03:37:56.321864 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-01 03:37:56.323207 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-01 03:37:56.324142 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-01 03:37:56.325617 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-01 03:37:56.326442 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-01 03:37:56.327817 | orchestrator |
2025-06-01 03:37:56.328385 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-01 03:37:56.331994 | orchestrator | Sunday 01 June 2025 03:37:56 +0000 (0:00:01.199) 0:00:03.145 ***********
2025-06-01 03:37:57.624219 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-01 03:37:57.625098 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-01 03:37:57.625148 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-01 03:37:57.625481 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-01 03:37:57.626595 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-01 03:37:57.626749 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-01 03:37:57.627075 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-01 03:37:57.627544 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-01 03:37:57.627985 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-01 03:37:57.628306 | orchestrator |
2025-06-01 03:37:57.628847 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-01 03:37:57.628999 | orchestrator | Sunday 01 June 2025 03:37:57 +0000 (0:00:01.306) 0:00:04.452 ***********
2025-06-01 03:37:59.885943 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-01 03:37:59.886114 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-01 03:37:59.886206 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-01 03:37:59.886224 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-01 03:37:59.886261 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-01 03:37:59.886273 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-01 03:37:59.886399 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-01 03:37:59.887110 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-01 03:37:59.887559 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-01 03:37:59.887999 | orchestrator |
2025-06-01 03:37:59.888430 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-01 03:37:59.888769 | orchestrator | Sunday 01 June 2025 03:37:59 +0000 (0:00:02.244) 0:00:06.696 ***********
2025-06-01 03:38:00.491692 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:38:00.491815 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:38:00.494405 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:38:00.494990 | orchestrator |
2025-06-01 03:38:00.496374 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-01 03:38:00.496776 | orchestrator | Sunday 01 June 2025 03:38:00 +0000 (0:00:00.621) 0:00:07.318 ***********
2025-06-01 03:38:01.156266 | orchestrator | changed: [testbed-node-3]
2025-06-01 03:38:01.156499 | orchestrator | changed: [testbed-node-4]
2025-06-01 03:38:01.157268 | orchestrator | changed: [testbed-node-5]
2025-06-01 03:38:01.157568 | orchestrator |
2025-06-01 03:38:01.161588 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:38:01.161696 | orchestrator | 2025-06-01 03:38:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:38:01.162642 | orchestrator | 2025-06-01 03:38:01 | INFO  | Please wait and do not abort execution.
2025-06-01 03:38:01.163563 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:01.164430 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:01.164744 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:01.166122 | orchestrator |
2025-06-01 03:38:01.166230 | orchestrator |
2025-06-01 03:38:01.167167 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:38:01.168069 | orchestrator | Sunday 01 June 2025 03:38:01 +0000 (0:00:00.662) 0:00:07.981 ***********
2025-06-01 03:38:01.169391 | orchestrator | ===============================================================================
2025-06-01 03:38:01.170353 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.24s
2025-06-01 03:38:01.171213 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s
2025-06-01 03:38:01.172521 | orchestrator | Check device availability ----------------------------------------------- 1.20s
2025-06-01 03:38:01.172942 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s
2025-06-01 03:38:01.174236 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2025-06-01 03:38:01.175406 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2025-06-01 03:38:01.177506 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2025-06-01 03:38:01.177554 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2025-06-01 03:38:01.178141 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2025-06-01 03:38:03.705858 | orchestrator | Registering Redlock._acquired_script
2025-06-01 03:38:03.706136 | orchestrator | Registering Redlock._extend_script
2025-06-01 03:38:03.706164 | orchestrator | Registering Redlock._release_script
2025-06-01 03:38:03.761870 | orchestrator | 2025-06-01 03:38:03 | INFO  | Task d12177bf-644b-4693-bdb0-e365234f0300 (facts) was prepared for execution.
2025-06-01 03:38:03.761977 | orchestrator | 2025-06-01 03:38:03 | INFO  | It takes a moment until task d12177bf-644b-4693-bdb0-e365234f0300 (facts) has been started and output is visible here.
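The wipe-partitions play above prepares the storage nodes' data disks for Ceph: remove filesystem signatures with wipefs, zero the first 32M, then reload udev rules and request device events. Approximated as plain shell (an illustrative sketch, not the play's actual tasks), the per-disk sequence is:

```shell
# Sketch (assumption: approximates the Ansible tasks as shell) of wiping one
# data disk before handing it to Ceph.
wipe_disk() {
    local dev="$1"
    # Remove all filesystem/RAID/partition-table signatures.
    wipefs --all "$dev"
    # Zero the first 32 MiB to destroy any leftover metadata (GPT, LVM, bluestore).
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync
    # Let udev re-read the now-empty device.
    udevadm control --reload-rules
    udevadm trigger --subsystem-match=block
}
```

Run against real hardware this is destructive, which is why the play only targets the dedicated data disks (`/dev/sdb` through `/dev/sdd`) on the storage nodes.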
2025-06-01 03:38:07.828481 | orchestrator |
2025-06-01 03:38:07.828596 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-01 03:38:07.828672 | orchestrator |
2025-06-01 03:38:07.829125 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-01 03:38:07.829655 | orchestrator | Sunday 01 June 2025 03:38:07 +0000 (0:00:00.277) 0:00:00.277 ***********
2025-06-01 03:38:08.638359 | orchestrator | ok: [testbed-manager]
2025-06-01 03:38:09.110576 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:38:09.110688 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:38:09.110703 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:38:09.110772 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:38:09.111731 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:38:09.112131 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:38:09.115528 | orchestrator |
2025-06-01 03:38:09.116269 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-01 03:38:09.117964 | orchestrator | Sunday 01 June 2025 03:38:09 +0000 (0:00:01.277) 0:00:01.555 ***********
2025-06-01 03:38:09.322859 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:38:09.438861 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:38:09.538290 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:38:09.632724 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:38:09.710776 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:38:10.554417 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:10.555824 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:10.557060 | orchestrator |
2025-06-01 03:38:10.558173 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 03:38:10.559371 | orchestrator |
2025-06-01 03:38:10.560529 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 03:38:10.561501 | orchestrator | Sunday 01 June 2025 03:38:10 +0000 (0:00:01.449) 0:00:03.004 ***********
2025-06-01 03:38:15.355202 | orchestrator | ok: [testbed-node-1]
2025-06-01 03:38:15.355476 | orchestrator | ok: [testbed-node-2]
2025-06-01 03:38:15.355823 | orchestrator | ok: [testbed-node-0]
2025-06-01 03:38:15.356456 | orchestrator | ok: [testbed-manager]
2025-06-01 03:38:15.357151 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:38:15.357632 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:38:15.358117 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:38:15.358849 | orchestrator |
2025-06-01 03:38:15.362256 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-01 03:38:15.363893 | orchestrator |
2025-06-01 03:38:15.364372 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-01 03:38:15.365504 | orchestrator | Sunday 01 June 2025 03:38:15 +0000 (0:00:04.802) 0:00:07.807 ***********
2025-06-01 03:38:15.796229 | orchestrator | skipping: [testbed-manager]
2025-06-01 03:38:15.915225 | orchestrator | skipping: [testbed-node-0]
2025-06-01 03:38:16.035203 | orchestrator | skipping: [testbed-node-1]
2025-06-01 03:38:16.137920 | orchestrator | skipping: [testbed-node-2]
2025-06-01 03:38:16.221426 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:38:16.265158 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:16.265957 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:16.270188 | orchestrator |
2025-06-01 03:38:16.270232 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 03:38:16.270247 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.270261 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.270296 | orchestrator | 2025-06-01 03:38:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 03:38:16.270367 | orchestrator | 2025-06-01 03:38:16 | INFO  | Please wait and do not abort execution.
2025-06-01 03:38:16.271159 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.272205 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.273227 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.275678 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.276028 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 03:38:16.276523 | orchestrator |
2025-06-01 03:38:16.277486 | orchestrator |
2025-06-01 03:38:16.278175 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 03:38:16.279481 | orchestrator | Sunday 01 June 2025 03:38:16 +0000 (0:00:00.909) 0:00:08.716 ***********
2025-06-01 03:38:16.279804 | orchestrator | ===============================================================================
2025-06-01 03:38:16.280628 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s
2025-06-01 03:38:16.281059 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s
2025-06-01 03:38:16.281819 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.28s
2025-06-01 03:38:16.282560 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.91s
2025-06-01 03:38:18.968397 | orchestrator | 2025-06-01 03:38:18 | INFO  | Task 0d972c16-eee2-4770-a65a-12d716e7d035 (ceph-configure-lvm-volumes) was prepared for execution.
2025-06-01 03:38:18.968499 | orchestrator | 2025-06-01 03:38:18 | INFO  | It takes a moment until task 0d972c16-eee2-4770-a65a-12d716e7d035 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-06-01 03:38:25.539384 | orchestrator |
2025-06-01 03:38:25.540237 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-01 03:38:25.546408 | orchestrator |
2025-06-01 03:38:25.546448 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 03:38:25.546669 | orchestrator | Sunday 01 June 2025 03:38:25 +0000 (0:00:00.440) 0:00:00.440 ***********
2025-06-01 03:38:25.789773 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 03:38:25.791445 | orchestrator |
2025-06-01 03:38:25.792721 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 03:38:25.797207 | orchestrator | Sunday 01 June 2025 03:38:25 +0000 (0:00:00.256) 0:00:00.697 ***********
2025-06-01 03:38:26.041548 | orchestrator | ok: [testbed-node-3]
2025-06-01 03:38:26.042276 | orchestrator |
2025-06-01 03:38:26.043250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:26.044403 | orchestrator | Sunday 01 June 2025 03:38:26 +0000 (0:00:00.250) 0:00:00.947 ***********
2025-06-01 03:38:26.447455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-01 03:38:26.447590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-01 03:38:26.449675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-01 03:38:26.451195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-01 03:38:26.451219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-01 03:38:26.454971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-01 03:38:26.455168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-01 03:38:26.458337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-01 03:38:26.459082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-01 03:38:26.459422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-01 03:38:26.460780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-01 03:38:26.462539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-01 03:38:26.466200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-01 03:38:26.466244 | orchestrator |
2025-06-01 03:38:26.466252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:26.466705 | orchestrator | Sunday 01 June 2025 03:38:26 +0000 (0:00:00.405) 0:00:01.353 ***********
2025-06-01 03:38:27.222942 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:38:27.223910 | orchestrator |
2025-06-01 03:38:27.224713 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:27.226453 | orchestrator | Sunday 01 June 2025 03:38:27 +0000 (0:00:00.774) 0:00:02.127 ***********
2025-06-01 03:38:27.413623 | orchestrator | skipping: [testbed-node-3]
2025-06-01 03:38:27.414679 | orchestrator |
2025-06-01 03:38:27.415375 | orchestrator | TASK [Add known links to the list of available block
devices] ****************** 2025-06-01 03:38:27.416216 | orchestrator | Sunday 01 June 2025 03:38:27 +0000 (0:00:00.195) 0:00:02.323 *********** 2025-06-01 03:38:27.576101 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:27.576579 | orchestrator | 2025-06-01 03:38:27.577710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:27.577807 | orchestrator | Sunday 01 June 2025 03:38:27 +0000 (0:00:00.162) 0:00:02.485 *********** 2025-06-01 03:38:27.724772 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:27.725433 | orchestrator | 2025-06-01 03:38:27.726113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:27.726415 | orchestrator | Sunday 01 June 2025 03:38:27 +0000 (0:00:00.147) 0:00:02.633 *********** 2025-06-01 03:38:27.897881 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:27.899499 | orchestrator | 2025-06-01 03:38:27.899724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:27.900267 | orchestrator | Sunday 01 June 2025 03:38:27 +0000 (0:00:00.170) 0:00:02.804 *********** 2025-06-01 03:38:28.067684 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:28.068858 | orchestrator | 2025-06-01 03:38:28.072817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:28.072846 | orchestrator | Sunday 01 June 2025 03:38:28 +0000 (0:00:00.172) 0:00:02.976 *********** 2025-06-01 03:38:28.243775 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:28.245498 | orchestrator | 2025-06-01 03:38:28.247103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:28.247608 | orchestrator | Sunday 01 June 2025 03:38:28 +0000 (0:00:00.175) 0:00:03.152 *********** 2025-06-01 03:38:28.433010 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 03:38:28.434517 | orchestrator | 2025-06-01 03:38:28.437438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:28.438619 | orchestrator | Sunday 01 June 2025 03:38:28 +0000 (0:00:00.188) 0:00:03.340 *********** 2025-06-01 03:38:28.810290 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b) 2025-06-01 03:38:28.810996 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b) 2025-06-01 03:38:28.811495 | orchestrator | 2025-06-01 03:38:28.812539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:28.813394 | orchestrator | Sunday 01 June 2025 03:38:28 +0000 (0:00:00.378) 0:00:03.718 *********** 2025-06-01 03:38:29.166631 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85) 2025-06-01 03:38:29.167568 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85) 2025-06-01 03:38:29.168743 | orchestrator | 2025-06-01 03:38:29.172836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:29.173260 | orchestrator | Sunday 01 June 2025 03:38:29 +0000 (0:00:00.355) 0:00:04.073 *********** 2025-06-01 03:38:29.662006 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087) 2025-06-01 03:38:29.664680 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087) 2025-06-01 03:38:29.665482 | orchestrator | 2025-06-01 03:38:29.666575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:29.668187 | orchestrator | Sunday 01 June 2025 03:38:29 +0000 (0:00:00.497) 0:00:04.571 *********** 2025-06-01 
03:38:30.220124 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9) 2025-06-01 03:38:30.220226 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9) 2025-06-01 03:38:30.220242 | orchestrator | 2025-06-01 03:38:30.220609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:30.221002 | orchestrator | Sunday 01 June 2025 03:38:30 +0000 (0:00:00.553) 0:00:05.125 *********** 2025-06-01 03:38:30.815284 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 03:38:30.815522 | orchestrator | 2025-06-01 03:38:30.816853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:30.817496 | orchestrator | Sunday 01 June 2025 03:38:30 +0000 (0:00:00.598) 0:00:05.724 *********** 2025-06-01 03:38:31.202903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-01 03:38:31.204956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-01 03:38:31.204992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-01 03:38:31.205005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-01 03:38:31.205772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-01 03:38:31.206624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-01 03:38:31.207560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-01 03:38:31.209701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-01 
03:38:31.209736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-01 03:38:31.209786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-01 03:38:31.210663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-01 03:38:31.211352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-01 03:38:31.213363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-01 03:38:31.213431 | orchestrator | 2025-06-01 03:38:31.216534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:31.216591 | orchestrator | Sunday 01 June 2025 03:38:31 +0000 (0:00:00.386) 0:00:06.111 *********** 2025-06-01 03:38:31.421756 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:31.424051 | orchestrator | 2025-06-01 03:38:31.424457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:31.426834 | orchestrator | Sunday 01 June 2025 03:38:31 +0000 (0:00:00.217) 0:00:06.328 *********** 2025-06-01 03:38:31.629969 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:31.631407 | orchestrator | 2025-06-01 03:38:31.631674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:31.632341 | orchestrator | Sunday 01 June 2025 03:38:31 +0000 (0:00:00.208) 0:00:06.537 *********** 2025-06-01 03:38:31.823643 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:31.823737 | orchestrator | 2025-06-01 03:38:31.824471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:31.827885 | orchestrator | Sunday 01 June 2025 03:38:31 +0000 (0:00:00.189) 0:00:06.727 *********** 
2025-06-01 03:38:31.993102 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:31.994388 | orchestrator | 2025-06-01 03:38:31.994668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:31.994990 | orchestrator | Sunday 01 June 2025 03:38:31 +0000 (0:00:00.172) 0:00:06.899 *********** 2025-06-01 03:38:32.173092 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:32.173286 | orchestrator | 2025-06-01 03:38:32.173396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:32.173857 | orchestrator | Sunday 01 June 2025 03:38:32 +0000 (0:00:00.182) 0:00:07.081 *********** 2025-06-01 03:38:32.362386 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:32.362571 | orchestrator | 2025-06-01 03:38:32.363003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:32.366098 | orchestrator | Sunday 01 June 2025 03:38:32 +0000 (0:00:00.190) 0:00:07.272 *********** 2025-06-01 03:38:32.508522 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:32.508626 | orchestrator | 2025-06-01 03:38:32.508747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:32.508828 | orchestrator | Sunday 01 June 2025 03:38:32 +0000 (0:00:00.146) 0:00:07.418 *********** 2025-06-01 03:38:32.704503 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:32.704687 | orchestrator | 2025-06-01 03:38:32.705278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:32.705331 | orchestrator | Sunday 01 June 2025 03:38:32 +0000 (0:00:00.190) 0:00:07.608 *********** 2025-06-01 03:38:33.579997 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-01 03:38:33.580106 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-01 03:38:33.580125 | orchestrator | ok: 
[testbed-node-3] => (item=sda15) 2025-06-01 03:38:33.580488 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-01 03:38:33.580740 | orchestrator | 2025-06-01 03:38:33.583084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:33.583193 | orchestrator | Sunday 01 June 2025 03:38:33 +0000 (0:00:00.878) 0:00:08.487 *********** 2025-06-01 03:38:33.798363 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:33.800806 | orchestrator | 2025-06-01 03:38:33.800841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:33.800856 | orchestrator | Sunday 01 June 2025 03:38:33 +0000 (0:00:00.215) 0:00:08.703 *********** 2025-06-01 03:38:33.969437 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:33.969546 | orchestrator | 2025-06-01 03:38:33.969628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:33.969719 | orchestrator | Sunday 01 June 2025 03:38:33 +0000 (0:00:00.176) 0:00:08.879 *********** 2025-06-01 03:38:34.138272 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:34.140948 | orchestrator | 2025-06-01 03:38:34.141075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 03:38:34.141410 | orchestrator | Sunday 01 June 2025 03:38:34 +0000 (0:00:00.168) 0:00:09.048 *********** 2025-06-01 03:38:34.354198 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:34.354358 | orchestrator | 2025-06-01 03:38:34.354376 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-01 03:38:34.354473 | orchestrator | Sunday 01 June 2025 03:38:34 +0000 (0:00:00.215) 0:00:09.263 *********** 2025-06-01 03:38:34.507558 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-01 03:38:34.507776 | orchestrator | ok: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': None}) 2025-06-01 03:38:34.508396 | orchestrator | 2025-06-01 03:38:34.508427 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-01 03:38:34.508441 | orchestrator | Sunday 01 June 2025 03:38:34 +0000 (0:00:00.151) 0:00:09.414 *********** 2025-06-01 03:38:34.615475 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:34.615564 | orchestrator | 2025-06-01 03:38:34.618422 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-01 03:38:34.618465 | orchestrator | Sunday 01 June 2025 03:38:34 +0000 (0:00:00.108) 0:00:09.523 *********** 2025-06-01 03:38:34.728883 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:34.729068 | orchestrator | 2025-06-01 03:38:34.729088 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-01 03:38:34.729436 | orchestrator | Sunday 01 June 2025 03:38:34 +0000 (0:00:00.112) 0:00:09.636 *********** 2025-06-01 03:38:34.841034 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:34.841644 | orchestrator | 2025-06-01 03:38:34.841962 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-01 03:38:34.842228 | orchestrator | Sunday 01 June 2025 03:38:34 +0000 (0:00:00.114) 0:00:09.751 *********** 2025-06-01 03:38:35.026385 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:38:35.026569 | orchestrator | 2025-06-01 03:38:35.027756 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-01 03:38:35.027784 | orchestrator | Sunday 01 June 2025 03:38:35 +0000 (0:00:00.183) 0:00:09.934 *********** 2025-06-01 03:38:35.196699 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24633ad7-3e48-5d36-bc1c-15adae99ed01'}}) 2025-06-01 03:38:35.196794 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '2a6257e3-2619-5e00-b9d8-6074ce245854'}}) 2025-06-01 03:38:35.196889 | orchestrator | 2025-06-01 03:38:35.197039 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-01 03:38:35.198773 | orchestrator | Sunday 01 June 2025 03:38:35 +0000 (0:00:00.171) 0:00:10.106 *********** 2025-06-01 03:38:35.328448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24633ad7-3e48-5d36-bc1c-15adae99ed01'}})  2025-06-01 03:38:35.328560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2a6257e3-2619-5e00-b9d8-6074ce245854'}})  2025-06-01 03:38:35.331428 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:35.331629 | orchestrator | 2025-06-01 03:38:35.332327 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 03:38:35.332598 | orchestrator | Sunday 01 June 2025 03:38:35 +0000 (0:00:00.131) 0:00:10.238 *********** 2025-06-01 03:38:35.622383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24633ad7-3e48-5d36-bc1c-15adae99ed01'}})  2025-06-01 03:38:35.625331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2a6257e3-2619-5e00-b9d8-6074ce245854'}})  2025-06-01 03:38:35.627866 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:35.627913 | orchestrator | 2025-06-01 03:38:35.627935 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 03:38:35.627955 | orchestrator | Sunday 01 June 2025 03:38:35 +0000 (0:00:00.292) 0:00:10.530 *********** 2025-06-01 03:38:35.751078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24633ad7-3e48-5d36-bc1c-15adae99ed01'}})  2025-06-01 03:38:35.751350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '2a6257e3-2619-5e00-b9d8-6074ce245854'}})  2025-06-01 03:38:35.751373 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:35.751645 | orchestrator | 2025-06-01 03:38:35.751928 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 03:38:35.752226 | orchestrator | Sunday 01 June 2025 03:38:35 +0000 (0:00:00.128) 0:00:10.658 *********** 2025-06-01 03:38:35.874687 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:38:35.875725 | orchestrator | 2025-06-01 03:38:35.877487 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 03:38:35.877578 | orchestrator | Sunday 01 June 2025 03:38:35 +0000 (0:00:00.125) 0:00:10.784 *********** 2025-06-01 03:38:36.054906 | orchestrator | ok: [testbed-node-3] 2025-06-01 03:38:36.059063 | orchestrator | 2025-06-01 03:38:36.059126 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 03:38:36.059141 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.178) 0:00:10.963 *********** 2025-06-01 03:38:36.187458 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:36.190567 | orchestrator | 2025-06-01 03:38:36.190603 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 03:38:36.190935 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.133) 0:00:11.096 *********** 2025-06-01 03:38:36.323598 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:36.323690 | orchestrator | 2025-06-01 03:38:36.326443 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-01 03:38:36.326938 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.133) 0:00:11.230 *********** 2025-06-01 03:38:36.459141 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:36.459238 | orchestrator | 2025-06-01 03:38:36.460052 | orchestrator | TASK 
[Print ceph_osd_devices] ************************************************** 2025-06-01 03:38:36.461880 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.136) 0:00:11.367 *********** 2025-06-01 03:38:36.592265 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 03:38:36.592446 | orchestrator |  "ceph_osd_devices": { 2025-06-01 03:38:36.592587 | orchestrator |  "sdb": { 2025-06-01 03:38:36.592610 | orchestrator |  "osd_lvm_uuid": "24633ad7-3e48-5d36-bc1c-15adae99ed01" 2025-06-01 03:38:36.595994 | orchestrator |  }, 2025-06-01 03:38:36.596260 | orchestrator |  "sdc": { 2025-06-01 03:38:36.596447 | orchestrator |  "osd_lvm_uuid": "2a6257e3-2619-5e00-b9d8-6074ce245854" 2025-06-01 03:38:36.596722 | orchestrator |  } 2025-06-01 03:38:36.596967 | orchestrator |  } 2025-06-01 03:38:36.597206 | orchestrator | } 2025-06-01 03:38:36.597409 | orchestrator | 2025-06-01 03:38:36.597763 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 03:38:36.597976 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.134) 0:00:11.502 *********** 2025-06-01 03:38:36.720710 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:36.720959 | orchestrator | 2025-06-01 03:38:36.721105 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 03:38:36.723830 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.128) 0:00:11.630 *********** 2025-06-01 03:38:36.836074 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:36.836319 | orchestrator | 2025-06-01 03:38:36.836349 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 03:38:36.836418 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.112) 0:00:11.742 *********** 2025-06-01 03:38:36.933990 | orchestrator | skipping: [testbed-node-3] 2025-06-01 03:38:36.934168 | orchestrator | 2025-06-01 03:38:36.934331 | orchestrator | TASK [Print 
configuration data] ************************************************ 2025-06-01 03:38:36.934355 | orchestrator | Sunday 01 June 2025 03:38:36 +0000 (0:00:00.101) 0:00:11.844 *********** 2025-06-01 03:38:37.078895 | orchestrator | changed: [testbed-node-3] => { 2025-06-01 03:38:37.079431 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 03:38:37.079566 | orchestrator |  "ceph_osd_devices": { 2025-06-01 03:38:37.080974 | orchestrator |  "sdb": { 2025-06-01 03:38:37.081126 | orchestrator |  "osd_lvm_uuid": "24633ad7-3e48-5d36-bc1c-15adae99ed01" 2025-06-01 03:38:37.081261 | orchestrator |  }, 2025-06-01 03:38:37.081510 | orchestrator |  "sdc": { 2025-06-01 03:38:37.081877 | orchestrator |  "osd_lvm_uuid": "2a6257e3-2619-5e00-b9d8-6074ce245854" 2025-06-01 03:38:37.084580 | orchestrator |  } 2025-06-01 03:38:37.084891 | orchestrator |  }, 2025-06-01 03:38:37.084958 | orchestrator |  "lvm_volumes": [ 2025-06-01 03:38:37.085137 | orchestrator |  { 2025-06-01 03:38:37.085388 | orchestrator |  "data": "osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01", 2025-06-01 03:38:37.085620 | orchestrator |  "data_vg": "ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01" 2025-06-01 03:38:37.085766 | orchestrator |  }, 2025-06-01 03:38:37.086059 | orchestrator |  { 2025-06-01 03:38:37.086223 | orchestrator |  "data": "osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854", 2025-06-01 03:38:37.086437 | orchestrator |  "data_vg": "ceph-2a6257e3-2619-5e00-b9d8-6074ce245854" 2025-06-01 03:38:37.086660 | orchestrator |  } 2025-06-01 03:38:37.086833 | orchestrator |  ] 2025-06-01 03:38:37.087056 | orchestrator |  } 2025-06-01 03:38:37.087326 | orchestrator | } 2025-06-01 03:38:37.087801 | orchestrator | 2025-06-01 03:38:37.087838 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 03:38:37.087993 | orchestrator | Sunday 01 June 2025 03:38:37 +0000 (0:00:00.144) 0:00:11.988 *********** 2025-06-01 03:38:38.874484 | orchestrator | changed: 
[testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 03:38:38.878130 | orchestrator | 2025-06-01 03:38:38.878167 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-01 03:38:38.879067 | orchestrator | 2025-06-01 03:38:38.880843 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 03:38:38.884793 | orchestrator | Sunday 01 June 2025 03:38:38 +0000 (0:00:01.793) 0:00:13.781 *********** 2025-06-01 03:38:39.119468 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-01 03:38:39.122514 | orchestrator | 2025-06-01 03:38:39.122864 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 03:38:39.124282 | orchestrator | Sunday 01 June 2025 03:38:39 +0000 (0:00:00.236) 0:00:14.018 *********** 2025-06-01 03:38:39.342731 | orchestrator | ok: [testbed-node-4] 2025-06-01 03:38:39.344833 | orchestrator | 2025-06-01 03:38:39.347023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:39.347527 | orchestrator | Sunday 01 June 2025 03:38:39 +0000 (0:00:00.232) 0:00:14.250 *********** 2025-06-01 03:38:39.727424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-01 03:38:39.727546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-01 03:38:39.728043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-01 03:38:39.729177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-01 03:38:39.729664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-01 03:38:39.730395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 
2025-06-01 03:38:39.732015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-01 03:38:39.732219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-01 03:38:39.732562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-01 03:38:39.733118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-01 03:38:39.733507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-01 03:38:39.733691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-01 03:38:39.734083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-01 03:38:39.734281 | orchestrator | 2025-06-01 03:38:39.734714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:39.734890 | orchestrator | Sunday 01 June 2025 03:38:39 +0000 (0:00:00.386) 0:00:14.637 *********** 2025-06-01 03:38:39.901829 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:38:39.902611 | orchestrator | 2025-06-01 03:38:39.902732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:39.902972 | orchestrator | Sunday 01 June 2025 03:38:39 +0000 (0:00:00.173) 0:00:14.811 *********** 2025-06-01 03:38:40.176005 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:38:40.176117 | orchestrator | 2025-06-01 03:38:40.176210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 03:38:40.176382 | orchestrator | Sunday 01 June 2025 03:38:40 +0000 (0:00:00.273) 0:00:15.084 *********** 2025-06-01 03:38:40.351959 | orchestrator | skipping: [testbed-node-4] 2025-06-01 03:38:40.352762 | orchestrator | 2025-06-01 
03:38:40.355952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:40.358746 | orchestrator | Sunday 01 June 2025 03:38:40 +0000 (0:00:00.172) 0:00:15.257 ***********
2025-06-01 03:38:40.556561 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:40.558880 | orchestrator |
2025-06-01 03:38:40.559020 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:40.559398 | orchestrator | Sunday 01 June 2025 03:38:40 +0000 (0:00:00.209) 0:00:15.466 ***********
2025-06-01 03:38:40.994822 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:40.995343 | orchestrator |
2025-06-01 03:38:40.998179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:40.999208 | orchestrator | Sunday 01 June 2025 03:38:40 +0000 (0:00:00.437) 0:00:15.903 ***********
2025-06-01 03:38:41.177808 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:41.179109 | orchestrator |
2025-06-01 03:38:41.181482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:41.181568 | orchestrator | Sunday 01 June 2025 03:38:41 +0000 (0:00:00.181) 0:00:16.084 ***********
2025-06-01 03:38:41.368858 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:41.368965 | orchestrator |
2025-06-01 03:38:41.368983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:41.368997 | orchestrator | Sunday 01 June 2025 03:38:41 +0000 (0:00:00.190) 0:00:16.275 ***********
2025-06-01 03:38:41.538139 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:41.538350 | orchestrator |
2025-06-01 03:38:41.538602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:41.538810 | orchestrator | Sunday 01 June 2025 03:38:41 +0000 (0:00:00.172) 0:00:16.447 ***********
2025-06-01 03:38:41.920816 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182)
2025-06-01 03:38:41.923128 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182)
2025-06-01 03:38:41.923257 | orchestrator |
2025-06-01 03:38:41.923366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:41.923661 | orchestrator | Sunday 01 June 2025 03:38:41 +0000 (0:00:00.381) 0:00:16.829 ***********
2025-06-01 03:38:42.310414 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c)
2025-06-01 03:38:42.313074 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c)
2025-06-01 03:38:42.313125 | orchestrator |
2025-06-01 03:38:42.313139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:42.313151 | orchestrator | Sunday 01 June 2025 03:38:42 +0000 (0:00:00.390) 0:00:17.219 ***********
2025-06-01 03:38:42.749443 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79)
2025-06-01 03:38:42.751796 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79)
2025-06-01 03:38:42.752133 | orchestrator |
2025-06-01 03:38:42.752837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:42.753518 | orchestrator | Sunday 01 June 2025 03:38:42 +0000 (0:00:00.435) 0:00:17.654 ***********
2025-06-01 03:38:43.178147 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110)
2025-06-01 03:38:43.180662 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110)
2025-06-01 03:38:43.180716 | orchestrator |
2025-06-01 03:38:43.180737 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:43.181146 | orchestrator | Sunday 01 June 2025 03:38:43 +0000 (0:00:00.429) 0:00:18.084 ***********
2025-06-01 03:38:43.525048 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-01 03:38:43.525141 | orchestrator |
2025-06-01 03:38:43.526366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:43.526533 | orchestrator | Sunday 01 June 2025 03:38:43 +0000 (0:00:00.349) 0:00:18.434 ***********
2025-06-01 03:38:43.916737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-01 03:38:43.919801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-01 03:38:43.919894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-01 03:38:43.921475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-01 03:38:43.922919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-01 03:38:43.924911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-01 03:38:43.925523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-01 03:38:43.928911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-01 03:38:43.928939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-01 03:38:43.929081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-01 03:38:43.929679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-01 03:38:43.930336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-01 03:38:43.930922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-01 03:38:43.931613 | orchestrator |
2025-06-01 03:38:43.934219 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:43.934850 | orchestrator | Sunday 01 June 2025 03:38:43 +0000 (0:00:00.390) 0:00:18.824 ***********
2025-06-01 03:38:44.125244 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:44.125752 | orchestrator |
2025-06-01 03:38:44.126585 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:44.127413 | orchestrator | Sunday 01 June 2025 03:38:44 +0000 (0:00:00.209) 0:00:19.033 ***********
2025-06-01 03:38:44.878365 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:44.878749 | orchestrator |
2025-06-01 03:38:44.879370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:44.880058 | orchestrator | Sunday 01 June 2025 03:38:44 +0000 (0:00:00.749) 0:00:19.783 ***********
2025-06-01 03:38:45.128813 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:45.190681 | orchestrator |
2025-06-01 03:38:45.190745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:45.190760 | orchestrator | Sunday 01 June 2025 03:38:45 +0000 (0:00:00.253) 0:00:20.037 ***********
2025-06-01 03:38:45.389589 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:45.393045 | orchestrator |
2025-06-01 03:38:45.393074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:45.395112 | orchestrator | Sunday 01 June 2025 03:38:45 +0000 (0:00:00.258) 0:00:20.295 ***********
2025-06-01 03:38:45.593492 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:45.599473 | orchestrator |
2025-06-01 03:38:45.603107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:45.603841 | orchestrator | Sunday 01 June 2025 03:38:45 +0000 (0:00:00.203) 0:00:20.498 ***********
2025-06-01 03:38:45.803762 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:45.807182 | orchestrator |
2025-06-01 03:38:45.808278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:45.809232 | orchestrator | Sunday 01 June 2025 03:38:45 +0000 (0:00:00.210) 0:00:20.708 ***********
2025-06-01 03:38:46.009538 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:46.009646 | orchestrator |
2025-06-01 03:38:46.010536 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:46.011474 | orchestrator | Sunday 01 June 2025 03:38:46 +0000 (0:00:00.205) 0:00:20.914 ***********
2025-06-01 03:38:46.222962 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:46.223584 | orchestrator |
2025-06-01 03:38:46.225924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:46.226794 | orchestrator | Sunday 01 June 2025 03:38:46 +0000 (0:00:00.209) 0:00:21.124 ***********
2025-06-01 03:38:46.904141 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-01 03:38:46.905183 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-01 03:38:46.906448 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-01 03:38:46.911067 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-01 03:38:46.912185 | orchestrator |
2025-06-01 03:38:46.913503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:46.914599 | orchestrator | Sunday 01 June 2025 03:38:46 +0000 (0:00:00.687) 0:00:21.812 ***********
2025-06-01 03:38:47.103424 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:47.103873 | orchestrator |
2025-06-01 03:38:47.104445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:47.105854 | orchestrator | Sunday 01 June 2025 03:38:47 +0000 (0:00:00.199) 0:00:22.012 ***********
2025-06-01 03:38:47.306427 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:47.306662 | orchestrator |
2025-06-01 03:38:47.308141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:47.309175 | orchestrator | Sunday 01 June 2025 03:38:47 +0000 (0:00:00.200) 0:00:22.212 ***********
2025-06-01 03:38:47.518987 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:47.520454 | orchestrator |
2025-06-01 03:38:47.525024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:47.526734 | orchestrator | Sunday 01 June 2025 03:38:47 +0000 (0:00:00.207) 0:00:22.420 ***********
2025-06-01 03:38:47.708762 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:47.711060 | orchestrator |
2025-06-01 03:38:47.711646 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-01 03:38:47.712664 | orchestrator | Sunday 01 June 2025 03:38:47 +0000 (0:00:00.196) 0:00:22.617 ***********
2025-06-01 03:38:48.099175 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-06-01 03:38:48.101727 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-06-01 03:38:48.103418 | orchestrator |
2025-06-01 03:38:48.106235 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-01 03:38:48.106851 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.390) 0:00:23.007 ***********
2025-06-01 03:38:48.234336 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:48.238781 | orchestrator |
2025-06-01 03:38:48.239685 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-01 03:38:48.242106 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.133) 0:00:23.140 ***********
2025-06-01 03:38:48.367322 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:48.368713 | orchestrator |
2025-06-01 03:38:48.371481 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-01 03:38:48.371814 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.132) 0:00:23.273 ***********
2025-06-01 03:38:48.501052 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:48.502377 | orchestrator |
2025-06-01 03:38:48.507059 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-01 03:38:48.507103 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.135) 0:00:23.408 ***********
2025-06-01 03:38:48.642676 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:38:48.644044 | orchestrator |
2025-06-01 03:38:48.648038 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-01 03:38:48.648070 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.141) 0:00:23.550 ***********
2025-06-01 03:38:48.806082 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'baa7c707-8012-580f-8c9e-09def35a523c'}})
2025-06-01 03:38:48.807018 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f9d798-cc3d-57c0-9350-8228d94606be'}})
2025-06-01 03:38:48.810243 | orchestrator |
2025-06-01 03:38:48.810276 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-01 03:38:48.810320 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.162) 0:00:23.712 ***********
2025-06-01 03:38:48.954119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'baa7c707-8012-580f-8c9e-09def35a523c'}})
2025-06-01 03:38:48.956338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f9d798-cc3d-57c0-9350-8228d94606be'}})
2025-06-01 03:38:48.960691 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:48.960736 | orchestrator |
2025-06-01 03:38:48.962908 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-01 03:38:48.963630 | orchestrator | Sunday 01 June 2025 03:38:48 +0000 (0:00:00.148) 0:00:23.860 ***********
2025-06-01 03:38:49.102511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'baa7c707-8012-580f-8c9e-09def35a523c'}})
2025-06-01 03:38:49.103539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f9d798-cc3d-57c0-9350-8228d94606be'}})
2025-06-01 03:38:49.104708 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:49.105444 | orchestrator |
2025-06-01 03:38:49.106695 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-01 03:38:49.107406 | orchestrator | Sunday 01 June 2025 03:38:49 +0000 (0:00:00.150) 0:00:24.011 ***********
2025-06-01 03:38:49.273462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'baa7c707-8012-580f-8c9e-09def35a523c'}})
2025-06-01 03:38:49.276912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f9d798-cc3d-57c0-9350-8228d94606be'}})
2025-06-01 03:38:49.278997 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:49.279025 | orchestrator |
2025-06-01 03:38:49.280362 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-01 03:38:49.281162 | orchestrator | Sunday 01 June 2025 03:38:49 +0000 (0:00:00.167) 0:00:24.178 ***********
2025-06-01 03:38:49.407716 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:38:49.408534 | orchestrator |
2025-06-01 03:38:49.410458 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-01 03:38:49.411515 | orchestrator | Sunday 01 June 2025 03:38:49 +0000 (0:00:00.131) 0:00:24.310 ***********
2025-06-01 03:38:49.548774 | orchestrator | ok: [testbed-node-4]
2025-06-01 03:38:49.549952 | orchestrator |
2025-06-01 03:38:49.550626 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-01 03:38:49.551613 | orchestrator | Sunday 01 June 2025 03:38:49 +0000 (0:00:00.147) 0:00:24.457 ***********
2025-06-01 03:38:49.693135 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:49.693372 | orchestrator |
2025-06-01 03:38:49.693708 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-01 03:38:49.697764 | orchestrator | Sunday 01 June 2025 03:38:49 +0000 (0:00:00.140) 0:00:24.598 ***********
2025-06-01 03:38:50.033925 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:50.034163 | orchestrator |
2025-06-01 03:38:50.035451 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-01 03:38:50.036812 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.343) 0:00:24.942 ***********
2025-06-01 03:38:50.185991 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:50.192435 | orchestrator |
2025-06-01 03:38:50.192638 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-01 03:38:50.193009 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.148) 0:00:25.090 ***********
2025-06-01 03:38:50.330785 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 03:38:50.331959 | orchestrator |  "ceph_osd_devices": {
2025-06-01 03:38:50.332933 | orchestrator |  "sdb": {
2025-06-01 03:38:50.337164 | orchestrator |  "osd_lvm_uuid": "baa7c707-8012-580f-8c9e-09def35a523c"
2025-06-01 03:38:50.337718 | orchestrator |  },
2025-06-01 03:38:50.338397 | orchestrator |  "sdc": {
2025-06-01 03:38:50.339226 | orchestrator |  "osd_lvm_uuid": "c1f9d798-cc3d-57c0-9350-8228d94606be"
2025-06-01 03:38:50.340088 | orchestrator |  }
2025-06-01 03:38:50.343218 | orchestrator |  }
2025-06-01 03:38:50.343835 | orchestrator | }
2025-06-01 03:38:50.344191 | orchestrator |
2025-06-01 03:38:50.347508 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-01 03:38:50.347744 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.146) 0:00:25.237 ***********
2025-06-01 03:38:50.478878 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:50.479442 | orchestrator |
2025-06-01 03:38:50.480237 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-01 03:38:50.480897 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.149) 0:00:25.387 ***********
2025-06-01 03:38:50.616555 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:50.618172 | orchestrator |
2025-06-01 03:38:50.619157 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-01 03:38:50.620852 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.137) 0:00:25.524 ***********
2025-06-01 03:38:50.752523 | orchestrator | skipping: [testbed-node-4]
2025-06-01 03:38:50.753337 | orchestrator |
2025-06-01 03:38:50.755401 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-01 03:38:50.761896 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.135) 0:00:25.660 ***********
2025-06-01 03:38:50.965145 | orchestrator | changed: [testbed-node-4] => {
2025-06-01 03:38:50.966004 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-01 03:38:50.967986 | orchestrator |  "ceph_osd_devices": {
2025-06-01 03:38:50.968011 | orchestrator |  "sdb": {
2025-06-01 03:38:50.968960 | orchestrator |  "osd_lvm_uuid": "baa7c707-8012-580f-8c9e-09def35a523c"
2025-06-01 03:38:50.969829 | orchestrator |  },
2025-06-01 03:38:50.970452 | orchestrator |  "sdc": {
2025-06-01 03:38:50.971499 | orchestrator |  "osd_lvm_uuid": "c1f9d798-cc3d-57c0-9350-8228d94606be"
2025-06-01 03:38:50.972172 | orchestrator |  }
2025-06-01 03:38:50.974145 | orchestrator |  },
2025-06-01 03:38:50.975335 | orchestrator |  "lvm_volumes": [
2025-06-01 03:38:50.976451 | orchestrator |  {
2025-06-01 03:38:50.977020 | orchestrator |  "data": "osd-block-baa7c707-8012-580f-8c9e-09def35a523c",
2025-06-01 03:38:50.980494 | orchestrator |  "data_vg": "ceph-baa7c707-8012-580f-8c9e-09def35a523c"
2025-06-01 03:38:50.981780 | orchestrator |  },
2025-06-01 03:38:50.982589 | orchestrator |  {
2025-06-01 03:38:50.983763 | orchestrator |  "data": "osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be",
2025-06-01 03:38:50.984554 | orchestrator |  "data_vg": "ceph-c1f9d798-cc3d-57c0-9350-8228d94606be"
2025-06-01 03:38:50.987438 | orchestrator |  }
2025-06-01 03:38:50.989015 | orchestrator |  ]
2025-06-01 03:38:50.990438 | orchestrator |  }
2025-06-01 03:38:50.991473 | orchestrator | }
2025-06-01 03:38:50.992711 | orchestrator |
2025-06-01 03:38:50.994146 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-01 03:38:50.995749 | orchestrator | Sunday 01 June 2025 03:38:50 +0000 (0:00:00.213) 0:00:25.873 ***********
2025-06-01 03:38:52.172467 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-01 03:38:52.172708 | orchestrator |
2025-06-01 03:38:52.178652 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-01 03:38:52.182382 | orchestrator |
2025-06-01 03:38:52.182474 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 03:38:52.182776 | orchestrator | Sunday 01 June 2025 03:38:52 +0000 (0:00:01.204) 0:00:27.077 ***********
2025-06-01 03:38:52.646806 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-01 03:38:52.648538 | orchestrator |
2025-06-01 03:38:52.651920 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 03:38:52.652675 | orchestrator | Sunday 01 June 2025 03:38:52 +0000 (0:00:00.475) 0:00:27.553 ***********
2025-06-01 03:38:53.384343 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:38:53.384505 | orchestrator |
2025-06-01 03:38:53.384590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:53.389843 | orchestrator | Sunday 01 June 2025 03:38:53 +0000 (0:00:00.735) 0:00:28.288 ***********
2025-06-01 03:38:53.780398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-01 03:38:53.780495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-01 03:38:53.780508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-01 03:38:53.780519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-01 03:38:53.783858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-01 03:38:53.783886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-01 03:38:53.784301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-01 03:38:53.785002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-01 03:38:53.786088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-01 03:38:53.787028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-01 03:38:53.787315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-01 03:38:53.788138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-01 03:38:53.788470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-01 03:38:53.789930 | orchestrator |
2025-06-01 03:38:53.790240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:53.790924 | orchestrator | Sunday 01 June 2025 03:38:53 +0000 (0:00:00.397) 0:00:28.685 ***********
2025-06-01 03:38:53.979361 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:53.980909 | orchestrator |
2025-06-01 03:38:53.981036 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:53.984421 | orchestrator | Sunday 01 June 2025 03:38:53 +0000 (0:00:00.199) 0:00:28.885 ***********
2025-06-01 03:38:54.185675 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:54.337721 | orchestrator |
2025-06-01 03:38:54.337840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:54.337859 | orchestrator | Sunday 01 June 2025 03:38:54 +0000 (0:00:00.204) 0:00:29.090 ***********
2025-06-01 03:38:54.392952 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:54.394932 | orchestrator |
2025-06-01 03:38:54.398466 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:54.398519 | orchestrator | Sunday 01 June 2025 03:38:54 +0000 (0:00:00.207) 0:00:29.298 ***********
2025-06-01 03:38:54.588348 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:54.588803 | orchestrator |
2025-06-01 03:38:54.592822 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:54.592856 | orchestrator | Sunday 01 June 2025 03:38:54 +0000 (0:00:00.196) 0:00:29.494 ***********
2025-06-01 03:38:54.784922 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:54.785018 | orchestrator |
2025-06-01 03:38:54.786805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:54.788037 | orchestrator | Sunday 01 June 2025 03:38:54 +0000 (0:00:00.194) 0:00:29.689 ***********
2025-06-01 03:38:54.977477 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:54.978822 | orchestrator |
2025-06-01 03:38:54.978853 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:54.978923 | orchestrator | Sunday 01 June 2025 03:38:54 +0000 (0:00:00.195) 0:00:29.885 ***********
2025-06-01 03:38:55.172975 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:55.173075 | orchestrator |
2025-06-01 03:38:55.173668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:55.174526 | orchestrator | Sunday 01 June 2025 03:38:55 +0000 (0:00:00.195) 0:00:30.081 ***********
2025-06-01 03:38:55.369657 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:55.369758 | orchestrator |
2025-06-01 03:38:55.369772 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:55.370123 | orchestrator | Sunday 01 June 2025 03:38:55 +0000 (0:00:00.195) 0:00:30.277 ***********
2025-06-01 03:38:56.012388 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403)
2025-06-01 03:38:56.012489 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403)
2025-06-01 03:38:56.012792 | orchestrator |
2025-06-01 03:38:56.013220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:56.014457 | orchestrator | Sunday 01 June 2025 03:38:56 +0000 (0:00:00.643) 0:00:30.920 ***********
2025-06-01 03:38:56.909503 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af)
2025-06-01 03:38:56.910182 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af)
2025-06-01 03:38:56.910807 | orchestrator |
2025-06-01 03:38:56.911651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:56.912362 | orchestrator | Sunday 01 June 2025 03:38:56 +0000 (0:00:00.895) 0:00:31.816 ***********
2025-06-01 03:38:57.350960 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c)
2025-06-01 03:38:57.351327 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c)
2025-06-01 03:38:57.352129 | orchestrator |
2025-06-01 03:38:57.353318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:57.355117 | orchestrator | Sunday 01 June 2025 03:38:57 +0000 (0:00:00.442) 0:00:32.258 ***********
2025-06-01 03:38:57.770469 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2)
2025-06-01 03:38:57.771188 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2)
2025-06-01 03:38:57.773247 | orchestrator |
2025-06-01 03:38:57.773347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 03:38:57.774160 | orchestrator | Sunday 01 June 2025 03:38:57 +0000 (0:00:00.418) 0:00:32.676 ***********
2025-06-01 03:38:58.101149 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-01 03:38:58.101808 | orchestrator |
2025-06-01 03:38:58.102548 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:58.103023 | orchestrator | Sunday 01 June 2025 03:38:58 +0000 (0:00:00.332) 0:00:33.009 ***********
2025-06-01 03:38:58.551673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-01 03:38:58.552026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-01 03:38:58.553508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-01 03:38:58.554465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-01 03:38:58.555683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-01 03:38:58.557493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-01 03:38:58.557517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-01 03:38:58.557529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-01 03:38:58.558009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-01 03:38:58.559140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-01 03:38:58.560099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-01 03:38:58.561011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-01 03:38:58.562072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-01 03:38:58.562972 | orchestrator |
2025-06-01 03:38:58.563862 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:58.564836 | orchestrator | Sunday 01 June 2025 03:38:58 +0000 (0:00:00.447) 0:00:33.457 ***********
2025-06-01 03:38:58.768841 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:58.769319 | orchestrator |
2025-06-01 03:38:58.770010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:58.770671 | orchestrator | Sunday 01 June 2025 03:38:58 +0000 (0:00:00.219) 0:00:33.677 ***********
2025-06-01 03:38:58.976727 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:58.977515 | orchestrator |
2025-06-01 03:38:58.978786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:58.979774 | orchestrator | Sunday 01 June 2025 03:38:58 +0000 (0:00:00.207) 0:00:33.884 ***********
2025-06-01 03:38:59.185994 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:59.186244 | orchestrator |
2025-06-01 03:38:59.187030 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:59.187430 | orchestrator | Sunday 01 June 2025 03:38:59 +0000 (0:00:00.209) 0:00:34.093 ***********
2025-06-01 03:38:59.384612 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:59.384712 | orchestrator |
2025-06-01 03:38:59.385407 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:59.386078 | orchestrator | Sunday 01 June 2025 03:38:59 +0000 (0:00:00.198) 0:00:34.292 ***********
2025-06-01 03:38:59.588008 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:38:59.588220 | orchestrator |
2025-06-01 03:38:59.589679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:38:59.590315 | orchestrator | Sunday 01 June 2025 03:38:59 +0000 (0:00:00.200) 0:00:34.492 ***********
2025-06-01 03:39:00.315151 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:00.315331 | orchestrator |
2025-06-01 03:39:00.316642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:00.317769 | orchestrator | Sunday 01 June 2025 03:39:00 +0000 (0:00:00.729) 0:00:35.222 ***********
2025-06-01 03:39:00.521696 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:00.522212 | orchestrator |
2025-06-01 03:39:00.523714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:00.524410 | orchestrator | Sunday 01 June 2025 03:39:00 +0000 (0:00:00.205) 0:00:35.427 ***********
2025-06-01 03:39:00.720419 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:00.720570 | orchestrator |
2025-06-01 03:39:00.721684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:00.723000 | orchestrator | Sunday 01 June 2025 03:39:00 +0000 (0:00:00.200) 0:00:35.627 ***********
2025-06-01 03:39:01.359446 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-01 03:39:01.359603 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-01 03:39:01.360450 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-01 03:39:01.360986 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-01 03:39:01.362503 | orchestrator |
2025-06-01 03:39:01.364780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:01.365862 | orchestrator | Sunday 01 June 2025 03:39:01 +0000 (0:00:00.639) 0:00:36.267 ***********
2025-06-01 03:39:01.565847 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:01.566222 | orchestrator |
2025-06-01 03:39:01.567420 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:01.568018 | orchestrator | Sunday 01 June 2025 03:39:01 +0000 (0:00:00.204) 0:00:36.471 ***********
2025-06-01 03:39:01.780817 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:01.781435 | orchestrator |
2025-06-01 03:39:01.782941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:01.783552 | orchestrator | Sunday 01 June 2025 03:39:01 +0000 (0:00:00.215) 0:00:36.687 ***********
2025-06-01 03:39:01.991647 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:01.992516 | orchestrator |
2025-06-01 03:39:01.993515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 03:39:01.994224 | orchestrator | Sunday 01 June 2025 03:39:01 +0000 (0:00:00.211) 0:00:36.899 ***********
2025-06-01 03:39:02.180938 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:02.181252 | orchestrator |
2025-06-01 03:39:02.181998 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-01 03:39:02.182931 | orchestrator | Sunday 01 June 2025 03:39:02 +0000 (0:00:00.188) 0:00:37.087 ***********
2025-06-01 03:39:02.374161 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-01 03:39:02.374797 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-01 03:39:02.375585 | orchestrator |
2025-06-01 03:39:02.376467 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-01 03:39:02.376956 | orchestrator | Sunday 01 June 2025 03:39:02 +0000 (0:00:00.193) 0:00:37.281 ***********
2025-06-01 03:39:02.509745 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:02.510732 | orchestrator |
2025-06-01 03:39:02.513478 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-01 03:39:02.514853 | orchestrator | Sunday 01 June 2025 03:39:02 +0000 (0:00:00.134) 0:00:37.416 ***********
2025-06-01 03:39:02.646551 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:02.647186 | orchestrator |
2025-06-01 03:39:02.648157 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-01 03:39:02.649497 | orchestrator | Sunday 01 June 2025 03:39:02 +0000 (0:00:00.137) 0:00:37.553 ***********
2025-06-01 03:39:02.791722 | orchestrator | skipping: [testbed-node-5]
2025-06-01 03:39:02.792938 | orchestrator |
2025-06-01 03:39:02.793779 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-01 03:39:02.794994 | orchestrator | Sunday 01 June 2025 03:39:02 +0000 (0:00:00.145) 0:00:37.698 ***********
2025-06-01 03:39:03.151032 | orchestrator | ok: [testbed-node-5]
2025-06-01 03:39:03.152216 | orchestrator |
2025-06-01 03:39:03.153560 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-01 03:39:03.154763 | orchestrator | Sunday 01 June 2025 03:39:03 +0000 (0:00:00.360) 0:00:38.059 ***********
2025-06-01 03:39:03.327049 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}})
2025-06-01 03:39:03.327147 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}})
2025-06-01 03:39:03.327389 | orchestrator |
2025-06-01 03:39:03.327985 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-01 03:39:03.329131 | orchestrator | Sunday 01 June 2025 03:39:03 +0000 (0:00:00.171) 0:00:38.231 ***********
2025-06-01 03:39:03.470510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}})
2025-06-01 03:39:03.470591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}})  2025-06-01 03:39:03.471334 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:03.472372 | orchestrator | 2025-06-01 03:39:03.473616 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 03:39:03.474262 | orchestrator | Sunday 01 June 2025 03:39:03 +0000 (0:00:00.145) 0:00:38.377 *********** 2025-06-01 03:39:03.625208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}})  2025-06-01 03:39:03.626089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}})  2025-06-01 03:39:03.628030 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:03.628309 | orchestrator | 2025-06-01 03:39:03.632024 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 03:39:03.632518 | orchestrator | Sunday 01 June 2025 03:39:03 +0000 (0:00:00.154) 0:00:38.531 *********** 2025-06-01 03:39:03.774500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}})  2025-06-01 03:39:03.775468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}})  2025-06-01 03:39:03.776877 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:03.778548 | orchestrator | 2025-06-01 03:39:03.778799 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 03:39:03.779911 | orchestrator | Sunday 01 June 2025 03:39:03 +0000 (0:00:00.150) 0:00:38.681 *********** 2025-06-01 03:39:03.915960 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:39:03.916932 | orchestrator | 
2025-06-01 03:39:03.917633 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 03:39:03.919091 | orchestrator | Sunday 01 June 2025 03:39:03 +0000 (0:00:00.142) 0:00:38.824 *********** 2025-06-01 03:39:04.055825 | orchestrator | ok: [testbed-node-5] 2025-06-01 03:39:04.056218 | orchestrator | 2025-06-01 03:39:04.057257 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 03:39:04.058104 | orchestrator | Sunday 01 June 2025 03:39:04 +0000 (0:00:00.139) 0:00:38.963 *********** 2025-06-01 03:39:04.206410 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:04.207989 | orchestrator | 2025-06-01 03:39:04.208028 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 03:39:04.208221 | orchestrator | Sunday 01 June 2025 03:39:04 +0000 (0:00:00.144) 0:00:39.108 *********** 2025-06-01 03:39:04.339682 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:04.340235 | orchestrator | 2025-06-01 03:39:04.340511 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-01 03:39:04.340924 | orchestrator | Sunday 01 June 2025 03:39:04 +0000 (0:00:00.139) 0:00:39.247 *********** 2025-06-01 03:39:04.480399 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:04.480489 | orchestrator | 2025-06-01 03:39:04.481266 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-01 03:39:04.481985 | orchestrator | Sunday 01 June 2025 03:39:04 +0000 (0:00:00.140) 0:00:39.387 *********** 2025-06-01 03:39:04.609848 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 03:39:04.611112 | orchestrator |  "ceph_osd_devices": { 2025-06-01 03:39:04.611741 | orchestrator |  "sdb": { 2025-06-01 03:39:04.612880 | orchestrator |  "osd_lvm_uuid": "a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f" 2025-06-01 03:39:04.613854 | orchestrator |  
}, 2025-06-01 03:39:04.614849 | orchestrator |  "sdc": { 2025-06-01 03:39:04.615659 | orchestrator |  "osd_lvm_uuid": "308e0632-b76f-5a8e-af6f-04e4a02ef5a9" 2025-06-01 03:39:04.616331 | orchestrator |  } 2025-06-01 03:39:04.617088 | orchestrator |  } 2025-06-01 03:39:04.617620 | orchestrator | } 2025-06-01 03:39:04.618243 | orchestrator | 2025-06-01 03:39:04.618984 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 03:39:04.619577 | orchestrator | Sunday 01 June 2025 03:39:04 +0000 (0:00:00.129) 0:00:39.517 *********** 2025-06-01 03:39:04.742073 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:04.742480 | orchestrator | 2025-06-01 03:39:04.743566 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 03:39:04.744242 | orchestrator | Sunday 01 June 2025 03:39:04 +0000 (0:00:00.132) 0:00:39.649 *********** 2025-06-01 03:39:05.103852 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:05.103953 | orchestrator | 2025-06-01 03:39:05.104159 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 03:39:05.104506 | orchestrator | Sunday 01 June 2025 03:39:05 +0000 (0:00:00.362) 0:00:40.012 *********** 2025-06-01 03:39:05.241557 | orchestrator | skipping: [testbed-node-5] 2025-06-01 03:39:05.241974 | orchestrator | 2025-06-01 03:39:05.242652 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-01 03:39:05.244144 | orchestrator | Sunday 01 June 2025 03:39:05 +0000 (0:00:00.136) 0:00:40.149 *********** 2025-06-01 03:39:05.456643 | orchestrator | changed: [testbed-node-5] => { 2025-06-01 03:39:05.456787 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 03:39:05.457793 | orchestrator |  "ceph_osd_devices": { 2025-06-01 03:39:05.459139 | orchestrator |  "sdb": { 2025-06-01 03:39:05.460096 | orchestrator |  "osd_lvm_uuid": 
"a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f" 2025-06-01 03:39:05.462093 | orchestrator |  }, 2025-06-01 03:39:05.465071 | orchestrator |  "sdc": { 2025-06-01 03:39:05.465978 | orchestrator |  "osd_lvm_uuid": "308e0632-b76f-5a8e-af6f-04e4a02ef5a9" 2025-06-01 03:39:05.467742 | orchestrator |  } 2025-06-01 03:39:05.468363 | orchestrator |  }, 2025-06-01 03:39:05.469566 | orchestrator |  "lvm_volumes": [ 2025-06-01 03:39:05.470490 | orchestrator |  { 2025-06-01 03:39:05.470829 | orchestrator |  "data": "osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f", 2025-06-01 03:39:05.471853 | orchestrator |  "data_vg": "ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f" 2025-06-01 03:39:05.472576 | orchestrator |  }, 2025-06-01 03:39:05.473680 | orchestrator |  { 2025-06-01 03:39:05.474131 | orchestrator |  "data": "osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9", 2025-06-01 03:39:05.474880 | orchestrator |  "data_vg": "ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9" 2025-06-01 03:39:05.475382 | orchestrator |  } 2025-06-01 03:39:05.476026 | orchestrator |  ] 2025-06-01 03:39:05.476804 | orchestrator |  } 2025-06-01 03:39:05.477548 | orchestrator | } 2025-06-01 03:39:05.477820 | orchestrator | 2025-06-01 03:39:05.478447 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 03:39:05.479048 | orchestrator | Sunday 01 June 2025 03:39:05 +0000 (0:00:00.215) 0:00:40.365 *********** 2025-06-01 03:39:06.447628 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 03:39:06.448730 | orchestrator | 2025-06-01 03:39:06.451512 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 03:39:06.452574 | orchestrator | 2025-06-01 03:39:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 03:39:06.452770 | orchestrator | 2025-06-01 03:39:06 | INFO  | Please wait and do not abort execution. 
2025-06-01 03:39:06.454569 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-01 03:39:06.456236 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-01 03:39:06.456997 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-01 03:39:06.458379 | orchestrator | 2025-06-01 03:39:06.458969 | orchestrator | 2025-06-01 03:39:06.459894 | orchestrator | 2025-06-01 03:39:06.460716 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 03:39:06.461200 | orchestrator | Sunday 01 June 2025 03:39:06 +0000 (0:00:00.987) 0:00:41.353 *********** 2025-06-01 03:39:06.462100 | orchestrator | =============================================================================== 2025-06-01 03:39:06.462245 | orchestrator | Write configuration file ------------------------------------------------ 3.99s 2025-06-01 03:39:06.463161 | orchestrator | Add known partitions to the list of available block devices ------------- 1.23s 2025-06-01 03:39:06.463909 | orchestrator | Get initial list of available block devices ----------------------------- 1.22s 2025-06-01 03:39:06.464672 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s 2025-06-01 03:39:06.465260 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s 2025-06-01 03:39:06.466322 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2025-06-01 03:39:06.466736 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-06-01 03:39:06.467074 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2025-06-01 03:39:06.468179 | orchestrator | Add known partitions to the list of available block devices 
------------- 0.75s 2025-06-01 03:39:06.468614 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.73s 2025-06-01 03:39:06.469564 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-06-01 03:39:06.469586 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-06-01 03:39:06.470479 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.69s 2025-06-01 03:39:06.470994 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-06-01 03:39:06.471725 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-06-01 03:39:06.472262 | orchestrator | Set WAL devices config data --------------------------------------------- 0.62s 2025-06-01 03:39:06.473182 | orchestrator | Print DB devices -------------------------------------------------------- 0.61s 2025-06-01 03:39:06.473414 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-06-01 03:39:06.474160 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s 2025-06-01 03:39:06.474960 | orchestrator | Print configuration data ------------------------------------------------ 0.57s 2025-06-01 03:39:18.907593 | orchestrator | Registering Redlock._acquired_script 2025-06-01 03:39:18.907710 | orchestrator | Registering Redlock._extend_script 2025-06-01 03:39:18.907726 | orchestrator | Registering Redlock._release_script 2025-06-01 03:39:18.961186 | orchestrator | 2025-06-01 03:39:18 | INFO  | Task c234c6b7-f405-4c03-a2de-3ae9639ef67a (sync inventory) is running in background. Output coming soon. 2025-06-01 04:39:21.443317 | orchestrator | 2025-06-01 04:39:21 | INFO  | Task f201b55d-0561-4a46-b805-8b054193cf15 (ceph-create-lvm-devices) was prepared for execution. 
2025-06-01 04:39:21.443652 | orchestrator | 2025-06-01 04:39:21 | INFO  | It takes a moment until task f201b55d-0561-4a46-b805-8b054193cf15 (ceph-create-lvm-devices) has been started and output is visible here. 2025-06-01 04:39:25.601205 | orchestrator | 2025-06-01 04:39:25.604292 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-01 04:39:25.604336 | orchestrator | 2025-06-01 04:39:25.604354 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 04:39:25.604962 | orchestrator | Sunday 01 June 2025 04:39:25 +0000 (0:00:00.294) 0:00:00.294 *********** 2025-06-01 04:39:25.824389 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 04:39:25.824751 | orchestrator | 2025-06-01 04:39:25.825525 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 04:39:25.826686 | orchestrator | Sunday 01 June 2025 04:39:25 +0000 (0:00:00.226) 0:00:00.521 *********** 2025-06-01 04:39:26.032996 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:39:26.033092 | orchestrator | 2025-06-01 04:39:26.033108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:26.034503 | orchestrator | Sunday 01 June 2025 04:39:26 +0000 (0:00:00.208) 0:00:00.729 *********** 2025-06-01 04:39:26.412444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-01 04:39:26.413644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-01 04:39:26.416815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-01 04:39:26.416856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-01 04:39:26.417465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-3 => (item=loop4) 2025-06-01 04:39:26.418096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-01 04:39:26.418737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-01 04:39:26.419209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-01 04:39:26.419934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-01 04:39:26.420623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-01 04:39:26.421488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-01 04:39:26.422156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-01 04:39:26.422318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-01 04:39:26.422699 | orchestrator | 2025-06-01 04:39:26.423389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:26.423749 | orchestrator | Sunday 01 June 2025 04:39:26 +0000 (0:00:00.379) 0:00:01.109 *********** 2025-06-01 04:39:26.863621 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:26.863975 | orchestrator | 2025-06-01 04:39:26.864821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:26.865582 | orchestrator | Sunday 01 June 2025 04:39:26 +0000 (0:00:00.449) 0:00:01.559 *********** 2025-06-01 04:39:27.048008 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:27.048427 | orchestrator | 2025-06-01 04:39:27.049122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:27.049803 | orchestrator | Sunday 01 June 2025 04:39:27 +0000 
(0:00:00.185) 0:00:01.744 *********** 2025-06-01 04:39:27.233821 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:27.233921 | orchestrator | 2025-06-01 04:39:27.233937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:27.233951 | orchestrator | Sunday 01 June 2025 04:39:27 +0000 (0:00:00.184) 0:00:01.929 *********** 2025-06-01 04:39:27.421328 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:27.422267 | orchestrator | 2025-06-01 04:39:27.422824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:27.423292 | orchestrator | Sunday 01 June 2025 04:39:27 +0000 (0:00:00.188) 0:00:02.118 *********** 2025-06-01 04:39:27.614614 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:27.615106 | orchestrator | 2025-06-01 04:39:27.616328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:27.616580 | orchestrator | Sunday 01 June 2025 04:39:27 +0000 (0:00:00.193) 0:00:02.311 *********** 2025-06-01 04:39:27.812199 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:27.812612 | orchestrator | 2025-06-01 04:39:27.813449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:27.814155 | orchestrator | Sunday 01 June 2025 04:39:27 +0000 (0:00:00.196) 0:00:02.508 *********** 2025-06-01 04:39:28.001174 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:28.001606 | orchestrator | 2025-06-01 04:39:28.002687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:28.004791 | orchestrator | Sunday 01 June 2025 04:39:27 +0000 (0:00:00.190) 0:00:02.698 *********** 2025-06-01 04:39:28.185423 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:28.185834 | orchestrator | 2025-06-01 04:39:28.186595 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-06-01 04:39:28.187065 | orchestrator | Sunday 01 June 2025 04:39:28 +0000 (0:00:00.184) 0:00:02.882 *********** 2025-06-01 04:39:28.569081 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b) 2025-06-01 04:39:28.569893 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b) 2025-06-01 04:39:28.571015 | orchestrator | 2025-06-01 04:39:28.571472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:28.572246 | orchestrator | Sunday 01 June 2025 04:39:28 +0000 (0:00:00.383) 0:00:03.266 *********** 2025-06-01 04:39:28.985833 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85) 2025-06-01 04:39:28.987481 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85) 2025-06-01 04:39:28.989096 | orchestrator | 2025-06-01 04:39:28.990430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:28.991483 | orchestrator | Sunday 01 June 2025 04:39:28 +0000 (0:00:00.414) 0:00:03.681 *********** 2025-06-01 04:39:29.596343 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087) 2025-06-01 04:39:29.597336 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087) 2025-06-01 04:39:29.598133 | orchestrator | 2025-06-01 04:39:29.598956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:29.600362 | orchestrator | Sunday 01 June 2025 04:39:29 +0000 (0:00:00.612) 0:00:04.293 *********** 2025-06-01 04:39:30.210386 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9) 
2025-06-01 04:39:30.210488 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9) 2025-06-01 04:39:30.211007 | orchestrator | 2025-06-01 04:39:30.211365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:39:30.211692 | orchestrator | Sunday 01 June 2025 04:39:30 +0000 (0:00:00.613) 0:00:04.906 *********** 2025-06-01 04:39:30.918238 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 04:39:30.919752 | orchestrator | 2025-06-01 04:39:30.920463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:30.923711 | orchestrator | Sunday 01 June 2025 04:39:30 +0000 (0:00:00.707) 0:00:05.614 *********** 2025-06-01 04:39:31.317717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-01 04:39:31.317923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-01 04:39:31.318856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-01 04:39:31.319731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-01 04:39:31.320927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-01 04:39:31.323943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-01 04:39:31.324808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-01 04:39:31.325751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-01 04:39:31.326686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-01 
04:39:31.327639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-01 04:39:31.328021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-01 04:39:31.328574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-01 04:39:31.329146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-01 04:39:31.329720 | orchestrator | 2025-06-01 04:39:31.330618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:31.331356 | orchestrator | Sunday 01 June 2025 04:39:31 +0000 (0:00:00.400) 0:00:06.014 *********** 2025-06-01 04:39:31.516034 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:31.517819 | orchestrator | 2025-06-01 04:39:31.521804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:31.521847 | orchestrator | Sunday 01 June 2025 04:39:31 +0000 (0:00:00.193) 0:00:06.208 *********** 2025-06-01 04:39:31.695427 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:31.698688 | orchestrator | 2025-06-01 04:39:31.704837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:31.704870 | orchestrator | Sunday 01 June 2025 04:39:31 +0000 (0:00:00.185) 0:00:06.393 *********** 2025-06-01 04:39:31.900384 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:31.900485 | orchestrator | 2025-06-01 04:39:31.902175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:31.902438 | orchestrator | Sunday 01 June 2025 04:39:31 +0000 (0:00:00.203) 0:00:06.597 *********** 2025-06-01 04:39:32.091998 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:32.093500 | orchestrator | 2025-06-01 
04:39:32.094862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:32.096175 | orchestrator | Sunday 01 June 2025 04:39:32 +0000 (0:00:00.191) 0:00:06.788 *********** 2025-06-01 04:39:32.284057 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:32.286693 | orchestrator | 2025-06-01 04:39:32.291725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:32.291752 | orchestrator | Sunday 01 June 2025 04:39:32 +0000 (0:00:00.190) 0:00:06.979 *********** 2025-06-01 04:39:32.472453 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:32.472638 | orchestrator | 2025-06-01 04:39:32.472658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:32.472963 | orchestrator | Sunday 01 June 2025 04:39:32 +0000 (0:00:00.188) 0:00:07.168 *********** 2025-06-01 04:39:32.653717 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:32.657781 | orchestrator | 2025-06-01 04:39:32.657938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:32.657974 | orchestrator | Sunday 01 June 2025 04:39:32 +0000 (0:00:00.183) 0:00:07.351 *********** 2025-06-01 04:39:32.846853 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:32.848017 | orchestrator | 2025-06-01 04:39:32.848590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:32.849337 | orchestrator | Sunday 01 June 2025 04:39:32 +0000 (0:00:00.191) 0:00:07.543 *********** 2025-06-01 04:39:33.929444 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-01 04:39:33.929588 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-01 04:39:33.929605 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-01 04:39:33.929701 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-01 
04:39:33.930430 | orchestrator | 2025-06-01 04:39:33.931340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:33.932256 | orchestrator | Sunday 01 June 2025 04:39:33 +0000 (0:00:01.076) 0:00:08.619 *********** 2025-06-01 04:39:34.124171 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:34.124270 | orchestrator | 2025-06-01 04:39:34.124920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:34.127332 | orchestrator | Sunday 01 June 2025 04:39:34 +0000 (0:00:00.200) 0:00:08.820 *********** 2025-06-01 04:39:34.307919 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:34.308019 | orchestrator | 2025-06-01 04:39:34.308816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:34.309653 | orchestrator | Sunday 01 June 2025 04:39:34 +0000 (0:00:00.184) 0:00:09.005 *********** 2025-06-01 04:39:34.504769 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:34.508415 | orchestrator | 2025-06-01 04:39:34.512910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:34.516470 | orchestrator | Sunday 01 June 2025 04:39:34 +0000 (0:00:00.197) 0:00:09.202 *********** 2025-06-01 04:39:34.693436 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:34.695120 | orchestrator | 2025-06-01 04:39:34.696683 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 04:39:34.697249 | orchestrator | Sunday 01 June 2025 04:39:34 +0000 (0:00:00.188) 0:00:09.390 *********** 2025-06-01 04:39:34.827175 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:39:34.829242 | orchestrator | 2025-06-01 04:39:34.829763 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 04:39:34.831759 | orchestrator | Sunday 01 
June 2025 04:39:34 +0000 (0:00:00.133) 0:00:09.524 ***********
2025-06-01 04:39:35.016751 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '24633ad7-3e48-5d36-bc1c-15adae99ed01'}})
2025-06-01 04:39:35.017464 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2a6257e3-2619-5e00-b9d8-6074ce245854'}})
2025-06-01 04:39:35.017577 | orchestrator |
2025-06-01 04:39:35.017934 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-01 04:39:35.018620 | orchestrator | Sunday 01 June 2025 04:39:35 +0000 (0:00:00.189) 0:00:09.713 ***********
2025-06-01 04:39:37.224119 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:37.224333 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:37.224890 | orchestrator |
2025-06-01 04:39:37.225888 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-01 04:39:37.226759 | orchestrator | Sunday 01 June 2025 04:39:37 +0000 (0:00:02.207) 0:00:11.921 ***********
2025-06-01 04:39:37.387400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:37.387606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:37.388264 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:37.389446 | orchestrator |
2025-06-01 04:39:37.390233 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-01 04:39:37.390751 | orchestrator | Sunday 01 June 2025 04:39:37 +0000 (0:00:00.161) 0:00:12.083 ***********
2025-06-01 04:39:38.788263 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:38.789748 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:38.790297 | orchestrator |
2025-06-01 04:39:38.790786 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-01 04:39:38.791286 | orchestrator | Sunday 01 June 2025 04:39:38 +0000 (0:00:01.401) 0:00:13.484 ***********
2025-06-01 04:39:38.945446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:38.947481 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:38.947976 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:38.949166 | orchestrator |
2025-06-01 04:39:38.950112 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-01 04:39:38.951029 | orchestrator | Sunday 01 June 2025 04:39:38 +0000 (0:00:00.155) 0:00:13.639 ***********
2025-06-01 04:39:39.085991 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:39.086127 | orchestrator |
2025-06-01 04:39:39.086139 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-01 04:39:39.086852 | orchestrator | Sunday 01 June 2025 04:39:39 +0000 (0:00:00.141) 0:00:13.781 ***********
2025-06-01 04:39:39.434269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:39.434372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:39.434725 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:39.435595 | orchestrator |
2025-06-01 04:39:39.436462 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-01 04:39:39.436878 | orchestrator | Sunday 01 June 2025 04:39:39 +0000 (0:00:00.349) 0:00:14.131 ***********
2025-06-01 04:39:39.575910 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:39.577492 | orchestrator |
2025-06-01 04:39:39.578474 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-01 04:39:39.579377 | orchestrator | Sunday 01 June 2025 04:39:39 +0000 (0:00:00.142) 0:00:14.273 ***********
2025-06-01 04:39:39.728619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:39.731712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:39.731742 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:39.732107 | orchestrator |
2025-06-01 04:39:39.733446 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-01 04:39:39.734225 | orchestrator | Sunday 01 June 2025 04:39:39 +0000 (0:00:00.150) 0:00:14.423 ***********
2025-06-01 04:39:39.851564 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:39.851648 | orchestrator |
2025-06-01 04:39:39.852230 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-01 04:39:39.853234 | orchestrator | Sunday 01 June 2025 04:39:39 +0000 (0:00:00.122) 0:00:14.546 ***********
2025-06-01 04:39:39.999689 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:40.000547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:40.004267 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:40.004294 | orchestrator |
2025-06-01 04:39:40.004307 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-01 04:39:40.004320 | orchestrator | Sunday 01 June 2025 04:39:39 +0000 (0:00:00.149) 0:00:14.695 ***********
2025-06-01 04:39:40.132466 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:40.138272 | orchestrator |
2025-06-01 04:39:40.138327 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-01 04:39:40.138341 | orchestrator | Sunday 01 June 2025 04:39:40 +0000 (0:00:00.133) 0:00:14.829 ***********
2025-06-01 04:39:40.291568 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:40.293027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:40.295069 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:40.296372 | orchestrator |
2025-06-01 04:39:40.296594 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-01 04:39:40.297172 | orchestrator | Sunday 01 June 2025 04:39:40 +0000 (0:00:00.159) 0:00:14.988 ***********
2025-06-01 04:39:40.445090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:40.445948 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:40.446818 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:40.447505 | orchestrator |
2025-06-01 04:39:40.447998 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-01 04:39:40.448560 | orchestrator | Sunday 01 June 2025 04:39:40 +0000 (0:00:00.154) 0:00:15.142 ***********
2025-06-01 04:39:40.617611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:40.618865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:40.620024 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:40.623134 | orchestrator |
2025-06-01 04:39:40.623163 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-01 04:39:40.623177 | orchestrator | Sunday 01 June 2025 04:39:40 +0000 (0:00:00.172) 0:00:15.315 ***********
2025-06-01 04:39:40.751162 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:40.752624 | orchestrator |
2025-06-01 04:39:40.758922 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-01 04:39:40.758959 | orchestrator | Sunday 01 June 2025 04:39:40 +0000 (0:00:00.132) 0:00:15.447 ***********
2025-06-01 04:39:40.885251 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:40.886624 | orchestrator |
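The "Count OSDs put on …" tasks and the "Fail if number of OSDs exceeds num_osds …" checks above tally how many entries in `lvm_volumes` land on each shared DB/WAL volume group and abort when one is oversubscribed. A minimal sketch of that bookkeeping (the `db_vg` key, the `num_osds` limit, and the sample entries are illustrative assumptions; on this node the tasks were skipped because no DB/WAL devices are configured):

```python
from collections import Counter

def count_osds_per_vg(lvm_volumes, key):
    """Tally how many lvm_volumes entries reference each VG under `key`."""
    return Counter(vol[key] for vol in lvm_volumes if key in vol)

# Hypothetical lvm_volumes list: two OSDs sharing one DB VG.
lvm_volumes = [
    {"data": "osd-block-aaa", "data_vg": "ceph-aaa", "db_vg": "ceph-db-0"},
    {"data": "osd-block-bbb", "data_vg": "ceph-bbb", "db_vg": "ceph-db-0"},
]

num_osds = 2  # assumed maximum number of OSDs allowed per DB VG
wanted = count_osds_per_vg(lvm_volumes, "db_vg")

# Mirrors the "Fail if number of OSDs exceeds num_osds for a DB VG" check.
for vg, count in wanted.items():
    assert count <= num_osds, f"{vg} would host {count} OSDs (limit {num_osds})"
```

With an empty `lvm_volumes` (as on testbed-node-3 here), the tally is empty and every check is skipped.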
2025-06-01 04:39:40.889133 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-01 04:39:40.889164 | orchestrator | Sunday 01 June 2025 04:39:40 +0000 (0:00:00.134) 0:00:15.582 ***********
2025-06-01 04:39:41.017331 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:41.018139 | orchestrator |
2025-06-01 04:39:41.021922 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-01 04:39:41.022941 | orchestrator | Sunday 01 June 2025 04:39:41 +0000 (0:00:00.129) 0:00:15.712 ***********
2025-06-01 04:39:41.340010 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 04:39:41.341046 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-06-01 04:39:41.343024 | orchestrator | }
2025-06-01 04:39:41.345929 | orchestrator |
2025-06-01 04:39:41.346860 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-01 04:39:41.347691 | orchestrator | Sunday 01 June 2025 04:39:41 +0000 (0:00:00.324) 0:00:16.036 ***********
2025-06-01 04:39:41.492098 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 04:39:41.493077 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-01 04:39:41.494093 | orchestrator | }
2025-06-01 04:39:41.497636 | orchestrator |
2025-06-01 04:39:41.497795 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-01 04:39:41.498937 | orchestrator | Sunday 01 June 2025 04:39:41 +0000 (0:00:00.151) 0:00:16.188 ***********
2025-06-01 04:39:41.642417 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 04:39:41.643611 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-01 04:39:41.648012 | orchestrator | }
2025-06-01 04:39:41.649317 | orchestrator |
2025-06-01 04:39:41.650965 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-01 04:39:41.651334 | orchestrator | Sunday 01 June 2025 04:39:41 +0000 (0:00:00.149) 0:00:16.338 ***********
2025-06-01 04:39:42.273366 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:42.275093 | orchestrator |
2025-06-01 04:39:42.276684 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-01 04:39:42.277586 | orchestrator | Sunday 01 June 2025 04:39:42 +0000 (0:00:00.630) 0:00:16.968 ***********
2025-06-01 04:39:42.777836 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:42.781216 | orchestrator |
2025-06-01 04:39:42.781262 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-01 04:39:42.781285 | orchestrator | Sunday 01 June 2025 04:39:42 +0000 (0:00:00.502) 0:00:17.471 ***********
2025-06-01 04:39:43.276742 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:43.277583 | orchestrator |
2025-06-01 04:39:43.278918 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-01 04:39:43.279370 | orchestrator | Sunday 01 June 2025 04:39:43 +0000 (0:00:00.501) 0:00:17.973 ***********
2025-06-01 04:39:43.423146 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:43.424717 | orchestrator |
2025-06-01 04:39:43.428089 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-01 04:39:43.428135 | orchestrator | Sunday 01 June 2025 04:39:43 +0000 (0:00:00.145) 0:00:18.119 ***********
2025-06-01 04:39:43.541808 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:43.545323 | orchestrator |
2025-06-01 04:39:43.546438 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-01 04:39:43.547828 | orchestrator | Sunday 01 June 2025 04:39:43 +0000 (0:00:00.115) 0:00:18.234 ***********
2025-06-01 04:39:43.639418 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:43.640551 | orchestrator |
2025-06-01 04:39:43.641889 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-01 04:39:43.645455 | orchestrator | Sunday 01 June 2025 04:39:43 +0000 (0:00:00.099) 0:00:18.334 ***********
2025-06-01 04:39:43.781145 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 04:39:43.782079 | orchestrator |  "vgs_report": {
2025-06-01 04:39:43.783309 | orchestrator |  "vg": []
2025-06-01 04:39:43.788042 | orchestrator |  }
2025-06-01 04:39:43.788407 | orchestrator | }
2025-06-01 04:39:43.789710 | orchestrator |
2025-06-01 04:39:43.789833 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-01 04:39:43.790762 | orchestrator | Sunday 01 June 2025 04:39:43 +0000 (0:00:00.143) 0:00:18.478 ***********
2025-06-01 04:39:43.911136 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:43.912818 | orchestrator |
2025-06-01 04:39:43.913982 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-01 04:39:43.914982 | orchestrator | Sunday 01 June 2025 04:39:43 +0000 (0:00:00.129) 0:00:18.607 ***********
2025-06-01 04:39:44.042795 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:44.046689 | orchestrator |
2025-06-01 04:39:44.048057 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-01 04:39:44.049365 | orchestrator | Sunday 01 June 2025 04:39:44 +0000 (0:00:00.129) 0:00:18.737 ***********
2025-06-01 04:39:44.373024 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:44.375044 | orchestrator |
2025-06-01 04:39:44.376320 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-01 04:39:44.377201 | orchestrator | Sunday 01 June 2025 04:39:44 +0000 (0:00:00.332) 0:00:19.070 ***********
2025-06-01 04:39:44.511314 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:44.512638 | orchestrator |
2025-06-01 04:39:44.516044 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-01 04:39:44.517750 | orchestrator | Sunday 01 June 2025 04:39:44 +0000 (0:00:00.136) 0:00:19.206 ***********
2025-06-01 04:39:44.656385 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:44.657892 | orchestrator |
2025-06-01 04:39:44.659768 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-01 04:39:44.661161 | orchestrator | Sunday 01 June 2025 04:39:44 +0000 (0:00:00.146) 0:00:19.353 ***********
2025-06-01 04:39:44.790403 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:44.791734 | orchestrator |
2025-06-01 04:39:44.795204 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-01 04:39:44.795252 | orchestrator | Sunday 01 June 2025 04:39:44 +0000 (0:00:00.133) 0:00:19.486 ***********
2025-06-01 04:39:44.918129 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:44.920122 | orchestrator |
2025-06-01 04:39:44.921490 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-01 04:39:44.923345 | orchestrator | Sunday 01 June 2025 04:39:44 +0000 (0:00:00.128) 0:00:19.614 ***********
2025-06-01 04:39:45.041194 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.042968 | orchestrator |
2025-06-01 04:39:45.043502 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-01 04:39:45.044689 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.121) 0:00:19.736 ***********
2025-06-01 04:39:45.158001 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.159844 | orchestrator |
2025-06-01 04:39:45.160773 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-01 04:39:45.161589 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.118) 0:00:19.854 ***********
2025-06-01 04:39:45.283055 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.284572 | orchestrator |
2025-06-01 04:39:45.285790 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-01 04:39:45.287219 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.125) 0:00:19.980 ***********
2025-06-01 04:39:45.390972 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.391156 | orchestrator |
2025-06-01 04:39:45.391694 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-01 04:39:45.392812 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.109) 0:00:20.089 ***********
2025-06-01 04:39:45.504946 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.505902 | orchestrator |
2025-06-01 04:39:45.506904 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-01 04:39:45.508668 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.113) 0:00:20.202 ***********
2025-06-01 04:39:45.623388 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.624065 | orchestrator |
2025-06-01 04:39:45.624823 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-01 04:39:45.628028 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.118) 0:00:20.321 ***********
2025-06-01 04:39:45.746943 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.747557 | orchestrator |
2025-06-01 04:39:45.748784 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-01 04:39:45.749871 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.123) 0:00:20.444 ***********
2025-06-01 04:39:45.890974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:45.892094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:45.893107 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:45.893776 | orchestrator |
2025-06-01 04:39:45.894816 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-01 04:39:45.895604 | orchestrator | Sunday 01 June 2025 04:39:45 +0000 (0:00:00.144) 0:00:20.588 ***********
2025-06-01 04:39:46.153458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.153616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.154216 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.155977 | orchestrator |
2025-06-01 04:39:46.157196 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-01 04:39:46.157841 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.260) 0:00:20.849 ***********
2025-06-01 04:39:46.277345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.277497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.278534 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.278942 | orchestrator |
2025-06-01 04:39:46.279310 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-01 04:39:46.280676 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.124) 0:00:20.974 ***********
2025-06-01 04:39:46.423309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.423767 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.424704 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.425002 | orchestrator |
2025-06-01 04:39:46.426129 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-01 04:39:46.427176 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.145) 0:00:21.119 ***********
2025-06-01 04:39:46.565408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.566222 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.569479 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.570000 | orchestrator |
2025-06-01 04:39:46.570310 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-01 04:39:46.570606 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.143) 0:00:21.262 ***********
2025-06-01 04:39:46.705093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.705756 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.710812 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.710870 | orchestrator |
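The "Gather DB/WAL VGs with total and available size in bytes" tasks and the size checks above read LVM's JSON report format and compare the space the planned LVs need against what each VG has free. A sketch of that parsing, under the assumption that the reports come from something like `vgs --units b --reportformat json` (the VG name and byte values below are illustrative; the log shows an empty `vg` list on this node):

```python
import json

# Example of the JSON report shape lvm2 emits with --reportformat json
# (sizes illustrative, suffixed "B" as produced by --units b).
vgs_json = """
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "100000000000B", "vg_free": "60000000000B"}
]}]}
"""

def vg_sizes(report_text):
    """Map VG name -> (total_bytes, free_bytes) from a vgs JSON report."""
    report = json.loads(report_text)
    sizes = {}
    for section in report["report"]:
        for vg in section.get("vg", []):
            sizes[vg["vg_name"]] = (
                int(vg["vg_size"].rstrip("B")),
                int(vg["vg_free"].rstrip("B")),
            )
    return sizes

sizes = vg_sizes(vgs_json)
total, free = sizes["ceph-db-0"]
# A "Fail if size ... > available" style check would compare the space
# required by the planned DB/WAL LVs against `free` here.
```

On testbed-node-3 the combined report contains no DB/WAL VGs, so all of the calculate/fail tasks are skipped.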
2025-06-01 04:39:46.711463 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-01 04:39:46.711488 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.139) 0:00:21.402 ***********
2025-06-01 04:39:46.846326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.847239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.853468 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.854364 | orchestrator |
2025-06-01 04:39:46.855645 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-01 04:39:46.855867 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.141) 0:00:21.544 ***********
2025-06-01 04:39:46.984048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:46.985357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:46.986794 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:46.987807 | orchestrator |
2025-06-01 04:39:46.988462 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-01 04:39:46.989571 | orchestrator | Sunday 01 June 2025 04:39:46 +0000 (0:00:00.137) 0:00:21.681 ***********
2025-06-01 04:39:47.437548 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:47.438956 | orchestrator |
2025-06-01 04:39:47.439372 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-01 04:39:47.440421 | orchestrator | Sunday 01 June 2025 04:39:47 +0000 (0:00:00.452) 0:00:22.134 ***********
2025-06-01 04:39:47.916664 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:47.919151 | orchestrator |
2025-06-01 04:39:47.919248 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-01 04:39:47.919416 | orchestrator | Sunday 01 June 2025 04:39:47 +0000 (0:00:00.479) 0:00:22.614 ***********
2025-06-01 04:39:48.045568 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:39:48.045664 | orchestrator |
2025-06-01 04:39:48.045679 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-01 04:39:48.045692 | orchestrator | Sunday 01 June 2025 04:39:48 +0000 (0:00:00.128) 0:00:22.743 ***********
2025-06-01 04:39:48.210669 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'vg_name': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:48.212271 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'vg_name': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:48.213850 | orchestrator |
2025-06-01 04:39:48.214962 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-01 04:39:48.216172 | orchestrator | Sunday 01 June 2025 04:39:48 +0000 (0:00:00.163) 0:00:22.907 ***********
2025-06-01 04:39:48.341935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:48.343949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:48.344973 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:48.346097 | orchestrator |
2025-06-01 04:39:48.346886 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-01 04:39:48.347722 | orchestrator | Sunday 01 June 2025 04:39:48 +0000 (0:00:00.131) 0:00:23.038 ***********
2025-06-01 04:39:48.584891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:48.584982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:48.584996 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:48.585009 | orchestrator |
2025-06-01 04:39:48.585021 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-01 04:39:48.585034 | orchestrator | Sunday 01 June 2025 04:39:48 +0000 (0:00:00.242) 0:00:23.281 ***********
2025-06-01 04:39:48.723162 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'})
2025-06-01 04:39:48.723968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'})
2025-06-01 04:39:48.724842 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:39:48.725428 | orchestrator |
2025-06-01 04:39:48.726342 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-01 04:39:48.726973 | orchestrator | Sunday 01 June 2025 04:39:48 +0000 (0:00:00.139) 0:00:23.420 ***********
2025-06-01 04:39:48.986939 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 04:39:48.989260 | orchestrator |  "lvm_report": {
2025-06-01 04:39:48.990060 | orchestrator |  "lv": [
2025-06-01 04:39:48.990690 | orchestrator |  {
2025-06-01 04:39:48.991839 | orchestrator |  "lv_name": "osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01",
2025-06-01 04:39:48.992855 | orchestrator |  "vg_name": "ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01"
2025-06-01 04:39:48.993307 | orchestrator |  },
2025-06-01 04:39:48.993993 | orchestrator |  {
2025-06-01 04:39:48.994738 | orchestrator |  "lv_name": "osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854",
2025-06-01 04:39:48.995323 | orchestrator |  "vg_name": "ceph-2a6257e3-2619-5e00-b9d8-6074ce245854"
2025-06-01 04:39:48.996072 | orchestrator |  }
2025-06-01 04:39:48.996447 | orchestrator |  ],
2025-06-01 04:39:48.997000 | orchestrator |  "pv": [
2025-06-01 04:39:48.997647 | orchestrator |  {
2025-06-01 04:39:48.998202 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-01 04:39:48.998579 | orchestrator |  "vg_name": "ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01"
2025-06-01 04:39:48.998929 | orchestrator |  },
2025-06-01 04:39:48.999408 | orchestrator |  {
2025-06-01 04:39:49.000029 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-01 04:39:49.000302 | orchestrator |  "vg_name": "ceph-2a6257e3-2619-5e00-b9d8-6074ce245854"
2025-06-01 04:39:49.000701 | orchestrator |  }
2025-06-01 04:39:49.001100 | orchestrator |  ]
2025-06-01 04:39:49.001490 | orchestrator |  }
2025-06-01 04:39:49.001975 | orchestrator | }
2025-06-01 04:39:49.002450 | orchestrator |
2025-06-01 04:39:49.002930 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-01 04:39:49.003861 | orchestrator |
2025-06-01 04:39:49.004923 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 04:39:49.005812 | orchestrator | Sunday 01 June 2025 04:39:48 +0000 (0:00:00.264) 0:00:23.684 ***********
2025-06-01 04:39:49.208462 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-01 04:39:49.208638 | orchestrator |
2025-06-01 04:39:49.208748 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 04:39:49.209107 | orchestrator | Sunday 01 June 2025 04:39:49 +0000 (0:00:00.219) 0:00:23.904 ***********
2025-06-01 04:39:49.445101 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:39:49.445877 | orchestrator |
2025-06-01 04:39:49.446882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:49.447823 | orchestrator | Sunday 01 June 2025 04:39:49 +0000 (0:00:00.237) 0:00:24.142 ***********
2025-06-01 04:39:49.867286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-01 04:39:49.867568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-01 04:39:49.869326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-01 04:39:49.869714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-01 04:39:49.871310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-01 04:39:49.872052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-01 04:39:49.872614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-01 04:39:49.873062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-01 04:39:49.873666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-01 04:39:49.874186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-01 04:39:49.874879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-01 04:39:49.875317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-01 04:39:49.875823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-01 04:39:49.876342 | orchestrator |
2025-06-01 04:39:49.876887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:49.877361 | orchestrator | Sunday 01 June 2025 04:39:49 +0000 (0:00:00.416) 0:00:24.559 ***********
2025-06-01 04:39:50.050195 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:50.051003 | orchestrator |
2025-06-01 04:39:50.051746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:50.052231 | orchestrator | Sunday 01 June 2025 04:39:50 +0000 (0:00:00.188) 0:00:24.747 ***********
2025-06-01 04:39:50.243643 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:50.244623 | orchestrator |
2025-06-01 04:39:50.244712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:50.246088 | orchestrator | Sunday 01 June 2025 04:39:50 +0000 (0:00:00.191) 0:00:24.939 ***********
2025-06-01 04:39:50.440122 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:50.443449 | orchestrator |
2025-06-01 04:39:50.443606 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:50.444629 | orchestrator | Sunday 01 June 2025 04:39:50 +0000 (0:00:00.193) 0:00:25.132 ***********
2025-06-01 04:39:51.037229 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:51.038814 | orchestrator |
2025-06-01 04:39:51.040984 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:51.042146 | orchestrator | Sunday 01 June 2025 04:39:51 +0000 (0:00:00.601) 0:00:25.734 ***********
2025-06-01 04:39:51.242709 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:51.243123 | orchestrator |
2025-06-01 04:39:51.244087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:51.244876 | orchestrator | Sunday 01 June 2025 04:39:51 +0000 (0:00:00.204) 0:00:25.938 ***********
2025-06-01 04:39:51.431701 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:51.432663 | orchestrator |
2025-06-01 04:39:51.433437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:51.433927 | orchestrator | Sunday 01 June 2025 04:39:51 +0000 (0:00:00.189) 0:00:26.128 ***********
2025-06-01 04:39:51.626498 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:51.626712 | orchestrator |
2025-06-01 04:39:51.626802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:51.628064 | orchestrator | Sunday 01 June 2025 04:39:51 +0000 (0:00:00.195) 0:00:26.323 ***********
2025-06-01 04:39:51.848876 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:51.848984 | orchestrator |
2025-06-01 04:39:51.849002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:51.849015 | orchestrator | Sunday 01 June 2025 04:39:51 +0000 (0:00:00.222) 0:00:26.545 ***********
2025-06-01 04:39:52.264742 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182)
2025-06-01 04:39:52.265185 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182)
2025-06-01 04:39:52.266259 | orchestrator |
2025-06-01 04:39:52.266934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:52.268111 | orchestrator | Sunday 01 June 2025 04:39:52 +0000 (0:00:00.414) 0:00:26.960 ***********
2025-06-01 04:39:52.669851 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c)
2025-06-01 04:39:52.671565 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c)
2025-06-01 04:39:52.672914 | orchestrator |
2025-06-01 04:39:52.673594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:52.674847 | orchestrator | Sunday 01 June 2025 04:39:52 +0000 (0:00:00.406) 0:00:27.366 ***********
2025-06-01 04:39:53.094402 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79)
2025-06-01 04:39:53.094616 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79)
2025-06-01 04:39:53.095863 | orchestrator |
2025-06-01 04:39:53.096795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:53.098392 | orchestrator | Sunday 01 June 2025 04:39:53 +0000 (0:00:00.424) 0:00:27.791 ***********
2025-06-01 04:39:53.516609 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110)
2025-06-01 04:39:53.516819 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110)
2025-06-01 04:39:53.517592 | orchestrator |
2025-06-01 04:39:53.517967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 04:39:53.518833 | orchestrator | Sunday 01 June 2025 04:39:53 +0000 (0:00:00.420) 0:00:28.212 ***********
2025-06-01 04:39:53.852360 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-01 04:39:53.852614 | orchestrator |
2025-06-01 04:39:53.853122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 04:39:53.853428 | orchestrator | Sunday 01 June 2025 04:39:53 +0000 (0:00:00.337) 0:00:28.549 ***********
2025-06-01 04:39:54.437681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-01 04:39:54.438198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-01 04:39:54.439417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-01 04:39:54.440568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-01 04:39:54.440594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-01 04:39:54.441767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-01 04:39:54.442714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-01 04:39:54.443316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-01 04:39:54.444640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-01 04:39:54.445621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-01 04:39:54.445942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-01 04:39:54.447013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-01 04:39:54.447580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-01 04:39:54.448313 | orchestrator |
2025-06-01 04:39:54.448947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 04:39:54.449747 | orchestrator | Sunday 01 June 2025 04:39:54 +0000 (0:00:00.583) 0:00:29.133 ***********
2025-06-01 04:39:54.642952 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:39:54.643060 | orchestrator | 2025-06-01 04:39:54.643165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:54.643726 | orchestrator | Sunday 01 June 2025 04:39:54 +0000 (0:00:00.206) 0:00:29.339 *********** 2025-06-01 04:39:54.846719 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:54.847019 | orchestrator | 2025-06-01 04:39:54.847893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:54.848405 | orchestrator | Sunday 01 June 2025 04:39:54 +0000 (0:00:00.204) 0:00:29.543 *********** 2025-06-01 04:39:55.063752 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:55.063932 | orchestrator | 2025-06-01 04:39:55.064067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:55.064584 | orchestrator | Sunday 01 June 2025 04:39:55 +0000 (0:00:00.217) 0:00:29.761 *********** 2025-06-01 04:39:55.261804 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:55.263015 | orchestrator | 2025-06-01 04:39:55.264043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:55.265792 | orchestrator | Sunday 01 June 2025 04:39:55 +0000 (0:00:00.196) 0:00:29.957 *********** 2025-06-01 04:39:55.456391 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:55.457081 | orchestrator | 2025-06-01 04:39:55.458419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:55.459595 | orchestrator | Sunday 01 June 2025 04:39:55 +0000 (0:00:00.195) 0:00:30.153 *********** 2025-06-01 04:39:55.668569 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:55.669463 | orchestrator | 2025-06-01 04:39:55.670144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:55.671162 | orchestrator | 
Sunday 01 June 2025 04:39:55 +0000 (0:00:00.212) 0:00:30.365 *********** 2025-06-01 04:39:55.869549 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:55.869765 | orchestrator | 2025-06-01 04:39:55.870948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:55.871581 | orchestrator | Sunday 01 June 2025 04:39:55 +0000 (0:00:00.200) 0:00:30.566 *********** 2025-06-01 04:39:56.088921 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:56.089443 | orchestrator | 2025-06-01 04:39:56.090308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:56.091213 | orchestrator | Sunday 01 June 2025 04:39:56 +0000 (0:00:00.219) 0:00:30.786 *********** 2025-06-01 04:39:56.907229 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-01 04:39:56.907808 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-01 04:39:56.909571 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-01 04:39:56.909627 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-01 04:39:56.909885 | orchestrator | 2025-06-01 04:39:56.910562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:56.911071 | orchestrator | Sunday 01 June 2025 04:39:56 +0000 (0:00:00.816) 0:00:31.602 *********** 2025-06-01 04:39:57.107015 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:57.107251 | orchestrator | 2025-06-01 04:39:57.108148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:57.109010 | orchestrator | Sunday 01 June 2025 04:39:57 +0000 (0:00:00.201) 0:00:31.803 *********** 2025-06-01 04:39:57.299018 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:57.299119 | orchestrator | 2025-06-01 04:39:57.300021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
2025-06-01 04:39:57.301216 | orchestrator | Sunday 01 June 2025 04:39:57 +0000 (0:00:00.190) 0:00:31.994 *********** 2025-06-01 04:39:57.908742 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:57.909276 | orchestrator | 2025-06-01 04:39:57.909945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:39:57.910943 | orchestrator | Sunday 01 June 2025 04:39:57 +0000 (0:00:00.610) 0:00:32.605 *********** 2025-06-01 04:39:58.108429 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:58.109082 | orchestrator | 2025-06-01 04:39:58.109794 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 04:39:58.110887 | orchestrator | Sunday 01 June 2025 04:39:58 +0000 (0:00:00.200) 0:00:32.805 *********** 2025-06-01 04:39:58.243729 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:39:58.244242 | orchestrator | 2025-06-01 04:39:58.244861 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 04:39:58.245660 | orchestrator | Sunday 01 June 2025 04:39:58 +0000 (0:00:00.134) 0:00:32.940 *********** 2025-06-01 04:39:58.433818 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'baa7c707-8012-580f-8c9e-09def35a523c'}}) 2025-06-01 04:39:58.433937 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f9d798-cc3d-57c0-9350-8228d94606be'}}) 2025-06-01 04:39:58.434373 | orchestrator | 2025-06-01 04:39:58.434697 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 04:39:58.435038 | orchestrator | Sunday 01 June 2025 04:39:58 +0000 (0:00:00.190) 0:00:33.131 *********** 2025-06-01 04:40:00.541255 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'}) 2025-06-01 
04:40:00.541363 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'}) 2025-06-01 04:40:00.542337 | orchestrator | 2025-06-01 04:40:00.543980 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 04:40:00.545418 | orchestrator | Sunday 01 June 2025 04:40:00 +0000 (0:00:02.104) 0:00:35.235 *********** 2025-06-01 04:40:00.725076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:00.725844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:00.726265 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:00.726663 | orchestrator | 2025-06-01 04:40:00.727869 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-01 04:40:00.728798 | orchestrator | Sunday 01 June 2025 04:40:00 +0000 (0:00:00.185) 0:00:35.421 *********** 2025-06-01 04:40:01.998445 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'}) 2025-06-01 04:40:02.000269 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'}) 2025-06-01 04:40:02.001216 | orchestrator | 2025-06-01 04:40:02.002364 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 04:40:02.003421 | orchestrator | Sunday 01 June 2025 04:40:01 +0000 (0:00:01.272) 0:00:36.693 *********** 2025-06-01 04:40:02.170738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:02.171155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:02.172239 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:02.173324 | orchestrator | 2025-06-01 04:40:02.174357 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-01 04:40:02.175326 | orchestrator | Sunday 01 June 2025 04:40:02 +0000 (0:00:00.173) 0:00:36.866 *********** 2025-06-01 04:40:02.305787 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:02.306413 | orchestrator | 2025-06-01 04:40:02.307237 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-01 04:40:02.307922 | orchestrator | Sunday 01 June 2025 04:40:02 +0000 (0:00:00.135) 0:00:37.002 *********** 2025-06-01 04:40:02.452629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:02.456296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:02.456337 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:02.456353 | orchestrator | 2025-06-01 04:40:02.456495 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 04:40:02.457224 | orchestrator | Sunday 01 June 2025 04:40:02 +0000 (0:00:00.147) 0:00:37.149 *********** 2025-06-01 04:40:02.597734 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:02.597849 | orchestrator | 2025-06-01 04:40:02.597951 | orchestrator | TASK [Print 'Create WAL VGs'] 
************************************************** 2025-06-01 04:40:02.598656 | orchestrator | Sunday 01 June 2025 04:40:02 +0000 (0:00:00.140) 0:00:37.290 *********** 2025-06-01 04:40:02.730306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:02.730639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:02.731285 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:02.732222 | orchestrator | 2025-06-01 04:40:02.733852 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 04:40:02.733875 | orchestrator | Sunday 01 June 2025 04:40:02 +0000 (0:00:00.136) 0:00:37.427 *********** 2025-06-01 04:40:03.066691 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:03.067372 | orchestrator | 2025-06-01 04:40:03.067900 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 04:40:03.068799 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 (0:00:00.337) 0:00:37.764 *********** 2025-06-01 04:40:03.227496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:03.227736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:03.228704 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:03.229557 | orchestrator | 2025-06-01 04:40:03.230336 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 04:40:03.231103 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 
(0:00:00.160) 0:00:37.924 *********** 2025-06-01 04:40:03.370236 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:03.370458 | orchestrator | 2025-06-01 04:40:03.372013 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-01 04:40:03.372174 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 (0:00:00.142) 0:00:38.067 *********** 2025-06-01 04:40:03.530461 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:03.531094 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:03.531980 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:03.532576 | orchestrator | 2025-06-01 04:40:03.533038 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-01 04:40:03.533830 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 (0:00:00.159) 0:00:38.227 *********** 2025-06-01 04:40:03.686675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:03.688816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:03.688894 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:03.689770 | orchestrator | 2025-06-01 04:40:03.690607 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-01 04:40:03.691391 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 (0:00:00.154) 0:00:38.381 *********** 2025-06-01 04:40:03.833601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:03.833720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:03.833902 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:03.834856 | orchestrator | 2025-06-01 04:40:03.835795 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-01 04:40:03.836791 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 (0:00:00.147) 0:00:38.529 *********** 2025-06-01 04:40:03.972302 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:03.973096 | orchestrator | 2025-06-01 04:40:03.973811 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-01 04:40:03.973924 | orchestrator | Sunday 01 June 2025 04:40:03 +0000 (0:00:00.138) 0:00:38.668 *********** 2025-06-01 04:40:04.106910 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:04.107029 | orchestrator | 2025-06-01 04:40:04.107079 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-01 04:40:04.107104 | orchestrator | Sunday 01 June 2025 04:40:04 +0000 (0:00:00.132) 0:00:38.801 *********** 2025-06-01 04:40:04.229091 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:04.230088 | orchestrator | 2025-06-01 04:40:04.230918 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-01 04:40:04.231621 | orchestrator | Sunday 01 June 2025 04:40:04 +0000 (0:00:00.124) 0:00:38.925 *********** 2025-06-01 04:40:04.367089 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 04:40:04.368280 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-01 04:40:04.371028 | orchestrator | } 2025-06-01 04:40:04.372273 | orchestrator | 2025-06-01 04:40:04.373501 | 
orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-01 04:40:04.374574 | orchestrator | Sunday 01 June 2025 04:40:04 +0000 (0:00:00.138) 0:00:39.063 *********** 2025-06-01 04:40:04.505636 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 04:40:04.506868 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-01 04:40:04.507828 | orchestrator | } 2025-06-01 04:40:04.509836 | orchestrator | 2025-06-01 04:40:04.510503 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-01 04:40:04.511283 | orchestrator | Sunday 01 June 2025 04:40:04 +0000 (0:00:00.138) 0:00:39.202 *********** 2025-06-01 04:40:04.649361 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 04:40:04.649975 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-01 04:40:04.651492 | orchestrator | } 2025-06-01 04:40:04.653333 | orchestrator | 2025-06-01 04:40:04.654314 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-01 04:40:04.654817 | orchestrator | Sunday 01 June 2025 04:40:04 +0000 (0:00:00.143) 0:00:39.346 *********** 2025-06-01 04:40:05.359492 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:05.360048 | orchestrator | 2025-06-01 04:40:05.361841 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-01 04:40:05.363811 | orchestrator | Sunday 01 June 2025 04:40:05 +0000 (0:00:00.709) 0:00:40.055 *********** 2025-06-01 04:40:05.876683 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:05.876787 | orchestrator | 2025-06-01 04:40:05.876804 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-01 04:40:05.876879 | orchestrator | Sunday 01 June 2025 04:40:05 +0000 (0:00:00.515) 0:00:40.571 *********** 2025-06-01 04:40:06.390706 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:06.391339 | orchestrator | 2025-06-01 
04:40:06.392127 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-01 04:40:06.392998 | orchestrator | Sunday 01 June 2025 04:40:06 +0000 (0:00:00.515) 0:00:41.087 *********** 2025-06-01 04:40:06.537464 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:06.538438 | orchestrator | 2025-06-01 04:40:06.539470 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-01 04:40:06.541362 | orchestrator | Sunday 01 June 2025 04:40:06 +0000 (0:00:00.147) 0:00:41.234 *********** 2025-06-01 04:40:06.654404 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:06.654634 | orchestrator | 2025-06-01 04:40:06.656108 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-01 04:40:06.656733 | orchestrator | Sunday 01 June 2025 04:40:06 +0000 (0:00:00.116) 0:00:41.351 *********** 2025-06-01 04:40:06.772796 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:06.773309 | orchestrator | 2025-06-01 04:40:06.774179 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-01 04:40:06.775321 | orchestrator | Sunday 01 June 2025 04:40:06 +0000 (0:00:00.117) 0:00:41.468 *********** 2025-06-01 04:40:06.915077 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 04:40:06.915620 | orchestrator |  "vgs_report": { 2025-06-01 04:40:06.916939 | orchestrator |  "vg": [] 2025-06-01 04:40:06.919008 | orchestrator |  } 2025-06-01 04:40:06.919242 | orchestrator | } 2025-06-01 04:40:06.919536 | orchestrator | 2025-06-01 04:40:06.919796 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-01 04:40:06.920784 | orchestrator | Sunday 01 June 2025 04:40:06 +0000 (0:00:00.141) 0:00:41.610 *********** 2025-06-01 04:40:07.049459 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:07.050444 | orchestrator | 2025-06-01 
04:40:07.051267 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-01 04:40:07.052270 | orchestrator | Sunday 01 June 2025 04:40:07 +0000 (0:00:00.136) 0:00:41.746 *********** 2025-06-01 04:40:07.184127 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:07.184352 | orchestrator | 2025-06-01 04:40:07.185578 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-01 04:40:07.186195 | orchestrator | Sunday 01 June 2025 04:40:07 +0000 (0:00:00.133) 0:00:41.880 *********** 2025-06-01 04:40:07.323454 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:07.323831 | orchestrator | 2025-06-01 04:40:07.324880 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-01 04:40:07.325881 | orchestrator | Sunday 01 June 2025 04:40:07 +0000 (0:00:00.139) 0:00:42.020 *********** 2025-06-01 04:40:07.457863 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:07.458815 | orchestrator | 2025-06-01 04:40:07.459734 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-01 04:40:07.460507 | orchestrator | Sunday 01 June 2025 04:40:07 +0000 (0:00:00.135) 0:00:42.155 *********** 2025-06-01 04:40:07.586324 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:07.587156 | orchestrator | 2025-06-01 04:40:07.588302 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-01 04:40:07.588997 | orchestrator | Sunday 01 June 2025 04:40:07 +0000 (0:00:00.123) 0:00:42.278 *********** 2025-06-01 04:40:07.919676 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:07.920540 | orchestrator | 2025-06-01 04:40:07.921503 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-01 04:40:07.922962 | orchestrator | Sunday 01 June 2025 04:40:07 +0000 (0:00:00.335) 
0:00:42.614 *********** 2025-06-01 04:40:08.062554 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.063427 | orchestrator | 2025-06-01 04:40:08.063848 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-01 04:40:08.065211 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.144) 0:00:42.758 *********** 2025-06-01 04:40:08.212865 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.213085 | orchestrator | 2025-06-01 04:40:08.213449 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-01 04:40:08.213667 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.150) 0:00:42.908 *********** 2025-06-01 04:40:08.358253 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.358934 | orchestrator | 2025-06-01 04:40:08.359098 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 04:40:08.359566 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.146) 0:00:43.055 *********** 2025-06-01 04:40:08.498210 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.500035 | orchestrator | 2025-06-01 04:40:08.500605 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 04:40:08.502136 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.139) 0:00:43.194 *********** 2025-06-01 04:40:08.618374 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.618974 | orchestrator | 2025-06-01 04:40:08.619778 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 04:40:08.620408 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.121) 0:00:43.315 *********** 2025-06-01 04:40:08.763773 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.764700 | orchestrator | 2025-06-01 04:40:08.765711 | orchestrator | TASK [Fail if DB LV size < 30 
GiB for ceph_db_devices] ************************* 2025-06-01 04:40:08.767465 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.144) 0:00:43.460 *********** 2025-06-01 04:40:08.898283 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:08.899383 | orchestrator | 2025-06-01 04:40:08.900331 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-01 04:40:08.901310 | orchestrator | Sunday 01 June 2025 04:40:08 +0000 (0:00:00.134) 0:00:43.595 *********** 2025-06-01 04:40:09.045909 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:09.047164 | orchestrator | 2025-06-01 04:40:09.048237 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 04:40:09.050002 | orchestrator | Sunday 01 June 2025 04:40:09 +0000 (0:00:00.146) 0:00:43.741 *********** 2025-06-01 04:40:09.192788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:09.193570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:09.194094 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:09.195086 | orchestrator | 2025-06-01 04:40:09.195858 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 04:40:09.196338 | orchestrator | Sunday 01 June 2025 04:40:09 +0000 (0:00:00.148) 0:00:43.889 *********** 2025-06-01 04:40:09.343176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:09.343269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 
'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:09.343906 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:09.344712 | orchestrator | 2025-06-01 04:40:09.345486 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-01 04:40:09.345867 | orchestrator | Sunday 01 June 2025 04:40:09 +0000 (0:00:00.150) 0:00:44.040 *********** 2025-06-01 04:40:09.511467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:09.513091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:09.513161 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:09.513732 | orchestrator | 2025-06-01 04:40:09.515411 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 04:40:09.515791 | orchestrator | Sunday 01 June 2025 04:40:09 +0000 (0:00:00.168) 0:00:44.208 *********** 2025-06-01 04:40:09.857790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:09.858999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:09.859985 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:09.860594 | orchestrator | 2025-06-01 04:40:09.861299 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 04:40:09.861878 | orchestrator | Sunday 01 June 2025 04:40:09 +0000 (0:00:00.344) 0:00:44.552 *********** 2025-06-01 04:40:10.034617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:10.036287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:10.037038 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:10.037480 | orchestrator | 2025-06-01 04:40:10.038862 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 04:40:10.039467 | orchestrator | Sunday 01 June 2025 04:40:10 +0000 (0:00:00.178) 0:00:44.731 *********** 2025-06-01 04:40:10.191837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:10.192895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:10.196065 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:10.196135 | orchestrator | 2025-06-01 04:40:10.196874 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 04:40:10.198141 | orchestrator | Sunday 01 June 2025 04:40:10 +0000 (0:00:00.156) 0:00:44.888 *********** 2025-06-01 04:40:10.346444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:10.347111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:10.347323 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:10.347680 | orchestrator | 2025-06-01 04:40:10.348113 | orchestrator | TASK [Print 'Create DB LVs for 
ceph_db_wal_devices'] *************************** 2025-06-01 04:40:10.348431 | orchestrator | Sunday 01 June 2025 04:40:10 +0000 (0:00:00.155) 0:00:45.044 *********** 2025-06-01 04:40:10.487308 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:10.487399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:10.488044 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:10.488788 | orchestrator | 2025-06-01 04:40:10.489102 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-01 04:40:10.489913 | orchestrator | Sunday 01 June 2025 04:40:10 +0000 (0:00:00.139) 0:00:45.184 *********** 2025-06-01 04:40:11.078420 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:11.078933 | orchestrator | 2025-06-01 04:40:11.079887 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-01 04:40:11.080636 | orchestrator | Sunday 01 June 2025 04:40:11 +0000 (0:00:00.589) 0:00:45.773 *********** 2025-06-01 04:40:11.584556 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:11.584736 | orchestrator | 2025-06-01 04:40:11.585712 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-01 04:40:11.586547 | orchestrator | Sunday 01 June 2025 04:40:11 +0000 (0:00:00.507) 0:00:46.281 *********** 2025-06-01 04:40:11.726785 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:40:11.727642 | orchestrator | 2025-06-01 04:40:11.728698 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-01 04:40:11.729936 | orchestrator | Sunday 01 June 2025 04:40:11 +0000 (0:00:00.141) 0:00:46.423 *********** 2025-06-01 04:40:11.900457 | 
orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'vg_name': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'}) 2025-06-01 04:40:11.901123 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'vg_name': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'}) 2025-06-01 04:40:11.902502 | orchestrator | 2025-06-01 04:40:11.903622 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-01 04:40:11.904417 | orchestrator | Sunday 01 June 2025 04:40:11 +0000 (0:00:00.171) 0:00:46.595 *********** 2025-06-01 04:40:12.053757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:12.054690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:12.055490 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:12.056432 | orchestrator | 2025-06-01 04:40:12.057237 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-01 04:40:12.058142 | orchestrator | Sunday 01 June 2025 04:40:12 +0000 (0:00:00.154) 0:00:46.750 *********** 2025-06-01 04:40:12.207609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:12.208495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:12.209404 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:12.210373 | orchestrator | 2025-06-01 04:40:12.211509 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is 
missing] ************************ 2025-06-01 04:40:12.212849 | orchestrator | Sunday 01 June 2025 04:40:12 +0000 (0:00:00.153) 0:00:46.904 *********** 2025-06-01 04:40:12.357997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'})  2025-06-01 04:40:12.358464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'})  2025-06-01 04:40:12.358984 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:40:12.359800 | orchestrator | 2025-06-01 04:40:12.360456 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-01 04:40:12.362118 | orchestrator | Sunday 01 June 2025 04:40:12 +0000 (0:00:00.150) 0:00:47.055 *********** 2025-06-01 04:40:12.827367 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 04:40:12.827607 | orchestrator |  "lvm_report": { 2025-06-01 04:40:12.828571 | orchestrator |  "lv": [ 2025-06-01 04:40:12.829534 | orchestrator |  { 2025-06-01 04:40:12.831183 | orchestrator |  "lv_name": "osd-block-baa7c707-8012-580f-8c9e-09def35a523c", 2025-06-01 04:40:12.832184 | orchestrator |  "vg_name": "ceph-baa7c707-8012-580f-8c9e-09def35a523c" 2025-06-01 04:40:12.832394 | orchestrator |  }, 2025-06-01 04:40:12.832894 | orchestrator |  { 2025-06-01 04:40:12.833291 | orchestrator |  "lv_name": "osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be", 2025-06-01 04:40:12.833725 | orchestrator |  "vg_name": "ceph-c1f9d798-cc3d-57c0-9350-8228d94606be" 2025-06-01 04:40:12.835585 | orchestrator |  } 2025-06-01 04:40:12.836044 | orchestrator |  ], 2025-06-01 04:40:12.836431 | orchestrator |  "pv": [ 2025-06-01 04:40:12.836913 | orchestrator |  { 2025-06-01 04:40:12.837383 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-01 04:40:12.837759 | orchestrator |  "vg_name": 
"ceph-baa7c707-8012-580f-8c9e-09def35a523c" 2025-06-01 04:40:12.838244 | orchestrator |  }, 2025-06-01 04:40:12.838699 | orchestrator |  { 2025-06-01 04:40:12.839049 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-01 04:40:12.839560 | orchestrator |  "vg_name": "ceph-c1f9d798-cc3d-57c0-9350-8228d94606be" 2025-06-01 04:40:12.840023 | orchestrator |  } 2025-06-01 04:40:12.840481 | orchestrator |  ] 2025-06-01 04:40:12.840787 | orchestrator |  } 2025-06-01 04:40:12.841235 | orchestrator | } 2025-06-01 04:40:12.841686 | orchestrator | 2025-06-01 04:40:12.842201 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-01 04:40:12.842501 | orchestrator | 2025-06-01 04:40:12.843038 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 04:40:12.843569 | orchestrator | Sunday 01 June 2025 04:40:12 +0000 (0:00:00.468) 0:00:47.523 *********** 2025-06-01 04:40:13.055822 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 04:40:13.056602 | orchestrator | 2025-06-01 04:40:13.057239 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 04:40:13.058278 | orchestrator | Sunday 01 June 2025 04:40:13 +0000 (0:00:00.228) 0:00:47.752 *********** 2025-06-01 04:40:13.293328 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:40:13.295265 | orchestrator | 2025-06-01 04:40:13.295312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:13.295326 | orchestrator | Sunday 01 June 2025 04:40:13 +0000 (0:00:00.237) 0:00:47.990 *********** 2025-06-01 04:40:13.691290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-01 04:40:13.692138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-01 04:40:13.693185 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-01 04:40:13.694390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-01 04:40:13.695832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-01 04:40:13.695870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-01 04:40:13.696479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-01 04:40:13.697471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-01 04:40:13.698471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-01 04:40:13.698811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-01 04:40:13.699266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-01 04:40:13.701242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-01 04:40:13.701764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-01 04:40:13.702220 | orchestrator | 2025-06-01 04:40:13.702870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:13.703205 | orchestrator | Sunday 01 June 2025 04:40:13 +0000 (0:00:00.396) 0:00:48.387 *********** 2025-06-01 04:40:13.888924 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:13.889294 | orchestrator | 2025-06-01 04:40:13.889805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:13.890810 | orchestrator | Sunday 01 June 2025 04:40:13 +0000 (0:00:00.197) 0:00:48.585 *********** 2025-06-01 04:40:14.072654 | 
orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:14.073039 | orchestrator | 2025-06-01 04:40:14.073748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:14.074414 | orchestrator | Sunday 01 June 2025 04:40:14 +0000 (0:00:00.184) 0:00:48.769 *********** 2025-06-01 04:40:14.272597 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:14.272804 | orchestrator | 2025-06-01 04:40:14.273865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:14.274550 | orchestrator | Sunday 01 June 2025 04:40:14 +0000 (0:00:00.199) 0:00:48.969 *********** 2025-06-01 04:40:14.477852 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:14.478116 | orchestrator | 2025-06-01 04:40:14.479039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:14.481128 | orchestrator | Sunday 01 June 2025 04:40:14 +0000 (0:00:00.204) 0:00:49.174 *********** 2025-06-01 04:40:14.662656 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:14.664182 | orchestrator | 2025-06-01 04:40:14.664995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:14.665805 | orchestrator | Sunday 01 June 2025 04:40:14 +0000 (0:00:00.184) 0:00:49.359 *********** 2025-06-01 04:40:15.265726 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:15.265965 | orchestrator | 2025-06-01 04:40:15.267068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:15.268963 | orchestrator | Sunday 01 June 2025 04:40:15 +0000 (0:00:00.602) 0:00:49.961 *********** 2025-06-01 04:40:15.454805 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:15.454905 | orchestrator | 2025-06-01 04:40:15.454987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-06-01 04:40:15.455244 | orchestrator | Sunday 01 June 2025 04:40:15 +0000 (0:00:00.190) 0:00:50.152 *********** 2025-06-01 04:40:15.638678 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:15.638779 | orchestrator | 2025-06-01 04:40:15.639495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:15.639879 | orchestrator | Sunday 01 June 2025 04:40:15 +0000 (0:00:00.182) 0:00:50.335 *********** 2025-06-01 04:40:16.042812 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403) 2025-06-01 04:40:16.043047 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403) 2025-06-01 04:40:16.043635 | orchestrator | 2025-06-01 04:40:16.044676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:16.045266 | orchestrator | Sunday 01 June 2025 04:40:16 +0000 (0:00:00.403) 0:00:50.738 *********** 2025-06-01 04:40:16.455353 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af) 2025-06-01 04:40:16.455577 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af) 2025-06-01 04:40:16.456207 | orchestrator | 2025-06-01 04:40:16.456803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:16.457288 | orchestrator | Sunday 01 June 2025 04:40:16 +0000 (0:00:00.414) 0:00:51.153 *********** 2025-06-01 04:40:16.873036 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c) 2025-06-01 04:40:16.873150 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c) 2025-06-01 04:40:16.873289 | orchestrator | 2025-06-01 04:40:16.873811 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-06-01 04:40:16.873883 | orchestrator | Sunday 01 June 2025 04:40:16 +0000 (0:00:00.416) 0:00:51.570 *********** 2025-06-01 04:40:17.289060 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2) 2025-06-01 04:40:17.289800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2) 2025-06-01 04:40:17.290099 | orchestrator | 2025-06-01 04:40:17.291046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 04:40:17.291948 | orchestrator | Sunday 01 June 2025 04:40:17 +0000 (0:00:00.415) 0:00:51.985 *********** 2025-06-01 04:40:17.608959 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 04:40:17.610933 | orchestrator | 2025-06-01 04:40:17.611351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:17.611704 | orchestrator | Sunday 01 June 2025 04:40:17 +0000 (0:00:00.321) 0:00:52.306 *********** 2025-06-01 04:40:18.042996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-01 04:40:18.043438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-01 04:40:18.044782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-01 04:40:18.046185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-01 04:40:18.048783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-01 04:40:18.048819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-01 04:40:18.049596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-5 => (item=loop6) 2025-06-01 04:40:18.050458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-01 04:40:18.051725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-01 04:40:18.052711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-01 04:40:18.053507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-01 04:40:18.054685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-01 04:40:18.055645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-01 04:40:18.056376 | orchestrator | 2025-06-01 04:40:18.057317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:18.058404 | orchestrator | Sunday 01 June 2025 04:40:18 +0000 (0:00:00.429) 0:00:52.736 *********** 2025-06-01 04:40:18.231665 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:18.232229 | orchestrator | 2025-06-01 04:40:18.232820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:18.233348 | orchestrator | Sunday 01 June 2025 04:40:18 +0000 (0:00:00.191) 0:00:52.928 *********** 2025-06-01 04:40:18.439088 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:18.439893 | orchestrator | 2025-06-01 04:40:18.440969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:18.442788 | orchestrator | Sunday 01 June 2025 04:40:18 +0000 (0:00:00.207) 0:00:53.135 *********** 2025-06-01 04:40:19.034765 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:19.034935 | orchestrator | 2025-06-01 04:40:19.036867 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-06-01 04:40:19.037189 | orchestrator | Sunday 01 June 2025 04:40:19 +0000 (0:00:00.593) 0:00:53.729 *********** 2025-06-01 04:40:19.245571 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:19.245673 | orchestrator | 2025-06-01 04:40:19.247598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:19.247653 | orchestrator | Sunday 01 June 2025 04:40:19 +0000 (0:00:00.213) 0:00:53.942 *********** 2025-06-01 04:40:19.447951 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:19.448772 | orchestrator | 2025-06-01 04:40:19.449589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:19.450792 | orchestrator | Sunday 01 June 2025 04:40:19 +0000 (0:00:00.201) 0:00:54.144 *********** 2025-06-01 04:40:19.640932 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:19.641388 | orchestrator | 2025-06-01 04:40:19.641862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:19.643188 | orchestrator | Sunday 01 June 2025 04:40:19 +0000 (0:00:00.193) 0:00:54.338 *********** 2025-06-01 04:40:19.833639 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:19.834005 | orchestrator | 2025-06-01 04:40:19.834962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:19.836131 | orchestrator | Sunday 01 June 2025 04:40:19 +0000 (0:00:00.192) 0:00:54.530 *********** 2025-06-01 04:40:20.028995 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:20.029453 | orchestrator | 2025-06-01 04:40:20.030088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:20.030442 | orchestrator | Sunday 01 June 2025 04:40:20 +0000 (0:00:00.194) 0:00:54.725 *********** 2025-06-01 04:40:20.673167 | orchestrator | ok: 
[testbed-node-5] => (item=sda1) 2025-06-01 04:40:20.673763 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-01 04:40:20.673793 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-01 04:40:20.673806 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-01 04:40:20.674097 | orchestrator | 2025-06-01 04:40:20.674305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:20.674654 | orchestrator | Sunday 01 June 2025 04:40:20 +0000 (0:00:00.645) 0:00:55.371 *********** 2025-06-01 04:40:20.883407 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:20.884016 | orchestrator | 2025-06-01 04:40:20.884459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:20.885203 | orchestrator | Sunday 01 June 2025 04:40:20 +0000 (0:00:00.209) 0:00:55.580 *********** 2025-06-01 04:40:21.074249 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:21.074350 | orchestrator | 2025-06-01 04:40:21.074818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:21.074845 | orchestrator | Sunday 01 June 2025 04:40:21 +0000 (0:00:00.189) 0:00:55.770 *********** 2025-06-01 04:40:21.273064 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:21.273225 | orchestrator | 2025-06-01 04:40:21.273972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 04:40:21.276997 | orchestrator | Sunday 01 June 2025 04:40:21 +0000 (0:00:00.198) 0:00:55.969 *********** 2025-06-01 04:40:21.465641 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:21.466014 | orchestrator | 2025-06-01 04:40:21.466500 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 04:40:21.467121 | orchestrator | Sunday 01 June 2025 04:40:21 +0000 (0:00:00.194) 0:00:56.163 *********** 2025-06-01 
04:40:21.792208 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:21.792306 | orchestrator | 2025-06-01 04:40:21.792743 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 04:40:21.793227 | orchestrator | Sunday 01 June 2025 04:40:21 +0000 (0:00:00.326) 0:00:56.489 *********** 2025-06-01 04:40:21.995173 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}}) 2025-06-01 04:40:21.995270 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}}) 2025-06-01 04:40:21.995420 | orchestrator | 2025-06-01 04:40:21.996035 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 04:40:21.996128 | orchestrator | Sunday 01 June 2025 04:40:21 +0000 (0:00:00.203) 0:00:56.693 *********** 2025-06-01 04:40:24.121344 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}) 2025-06-01 04:40:24.121597 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}) 2025-06-01 04:40:24.122431 | orchestrator | 2025-06-01 04:40:24.122952 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 04:40:24.123886 | orchestrator | Sunday 01 June 2025 04:40:24 +0000 (0:00:02.124) 0:00:58.817 *********** 2025-06-01 04:40:24.270861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})  2025-06-01 04:40:24.271228 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 
'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})  2025-06-01 04:40:24.271643 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:24.272727 | orchestrator | 2025-06-01 04:40:24.273581 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-01 04:40:24.275071 | orchestrator | Sunday 01 June 2025 04:40:24 +0000 (0:00:00.150) 0:00:58.968 *********** 2025-06-01 04:40:25.583065 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}) 2025-06-01 04:40:25.583757 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}) 2025-06-01 04:40:25.586244 | orchestrator | 2025-06-01 04:40:25.586320 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 04:40:25.586901 | orchestrator | Sunday 01 June 2025 04:40:25 +0000 (0:00:01.310) 0:01:00.278 *********** 2025-06-01 04:40:25.732828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})  2025-06-01 04:40:25.733457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})  2025-06-01 04:40:25.734480 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:25.735033 | orchestrator | 2025-06-01 04:40:25.735419 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-01 04:40:25.736107 | orchestrator | Sunday 01 June 2025 04:40:25 +0000 (0:00:00.151) 0:01:00.430 *********** 2025-06-01 04:40:25.866700 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:25.866802 | orchestrator | 2025-06-01 04:40:25.868772 | orchestrator | TASK 
[Print 'Create DB VGs'] *************************************************** 2025-06-01 04:40:25.869202 | orchestrator | Sunday 01 June 2025 04:40:25 +0000 (0:00:00.133) 0:01:00.563 *********** 2025-06-01 04:40:26.014289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})  2025-06-01 04:40:26.014612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})  2025-06-01 04:40:26.015864 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:26.017210 | orchestrator | 2025-06-01 04:40:26.017954 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 04:40:26.019058 | orchestrator | Sunday 01 June 2025 04:40:26 +0000 (0:00:00.147) 0:01:00.711 *********** 2025-06-01 04:40:26.153666 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:26.153887 | orchestrator | 2025-06-01 04:40:26.154876 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-01 04:40:26.156326 | orchestrator | Sunday 01 June 2025 04:40:26 +0000 (0:00:00.139) 0:01:00.851 *********** 2025-06-01 04:40:26.310251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})  2025-06-01 04:40:26.311238 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})  2025-06-01 04:40:26.311586 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:26.312834 | orchestrator | 2025-06-01 04:40:26.313964 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 04:40:26.314783 | orchestrator | Sunday 01 June 2025 
04:40:26 +0000 (0:00:00.154) 0:01:01.005 *********** 2025-06-01 04:40:26.454004 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:26.454926 | orchestrator | 2025-06-01 04:40:26.456807 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 04:40:26.457735 | orchestrator | Sunday 01 June 2025 04:40:26 +0000 (0:00:00.145) 0:01:01.151 *********** 2025-06-01 04:40:26.605715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})  2025-06-01 04:40:26.607269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})  2025-06-01 04:40:26.608484 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:40:26.609785 | orchestrator | 2025-06-01 04:40:26.610631 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 04:40:26.611240 | orchestrator | Sunday 01 June 2025 04:40:26 +0000 (0:00:00.151) 0:01:01.302 *********** 2025-06-01 04:40:26.749613 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:40:26.750573 | orchestrator | 2025-06-01 04:40:26.752562 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-01 04:40:26.752674 | orchestrator | Sunday 01 June 2025 04:40:26 +0000 (0:00:00.142) 0:01:01.445 *********** 2025-06-01 04:40:27.095117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})  2025-06-01 04:40:27.096891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})  2025-06-01 04:40:27.098325 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
04:40:27.099796 | orchestrator |
2025-06-01 04:40:27.100498 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-01 04:40:27.101246 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.346) 0:01:01.792 ***********
2025-06-01 04:40:27.248415 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:27.248571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:27.248587 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:27.249563 | orchestrator |
2025-06-01 04:40:27.250082 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-01 04:40:27.250771 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.150) 0:01:01.942 ***********
2025-06-01 04:40:27.402728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:27.403230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:27.403988 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:27.405226 | orchestrator |
2025-06-01 04:40:27.405663 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-01 04:40:27.406485 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.156) 0:01:02.098 ***********
2025-06-01 04:40:27.542216 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:27.542316 | orchestrator |
2025-06-01 04:40:27.542881 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-01 04:40:27.543979 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.140) 0:01:02.238 ***********
2025-06-01 04:40:27.678986 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:27.679091 | orchestrator |
2025-06-01 04:40:27.679181 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-01 04:40:27.679866 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.137) 0:01:02.376 ***********
2025-06-01 04:40:27.819663 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:27.819943 | orchestrator |
2025-06-01 04:40:27.820080 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-01 04:40:27.820580 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.140) 0:01:02.517 ***********
2025-06-01 04:40:27.968372 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 04:40:27.968879 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-06-01 04:40:27.969978 | orchestrator | }
2025-06-01 04:40:27.971344 | orchestrator |
2025-06-01 04:40:27.972203 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-01 04:40:27.974118 | orchestrator | Sunday 01 June 2025 04:40:27 +0000 (0:00:00.148) 0:01:02.665 ***********
2025-06-01 04:40:28.103112 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 04:40:28.103232 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-01 04:40:28.103447 | orchestrator | }
2025-06-01 04:40:28.103990 | orchestrator |
2025-06-01 04:40:28.104761 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-01 04:40:28.105465 | orchestrator | Sunday 01 June 2025 04:40:28 +0000 (0:00:00.133) 0:01:02.799 ***********
2025-06-01 04:40:28.243300 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 04:40:28.243662 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-01 04:40:28.244621 | orchestrator | }
2025-06-01 04:40:28.245245 | orchestrator |
2025-06-01 04:40:28.245932 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-01 04:40:28.246571 | orchestrator | Sunday 01 June 2025 04:40:28 +0000 (0:00:00.140) 0:01:02.939 ***********
2025-06-01 04:40:28.792852 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:28.792957 | orchestrator |
2025-06-01 04:40:28.794187 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-01 04:40:28.795124 | orchestrator | Sunday 01 June 2025 04:40:28 +0000 (0:00:00.549) 0:01:03.489 ***********
2025-06-01 04:40:29.297246 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:29.297423 | orchestrator |
2025-06-01 04:40:29.298104 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-01 04:40:29.298872 | orchestrator | Sunday 01 June 2025 04:40:29 +0000 (0:00:00.504) 0:01:03.993 ***********
2025-06-01 04:40:29.795585 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:29.795906 | orchestrator |
2025-06-01 04:40:29.797008 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-01 04:40:29.797778 | orchestrator | Sunday 01 June 2025 04:40:29 +0000 (0:00:00.498) 0:01:04.492 ***********
2025-06-01 04:40:30.144599 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:30.145331 | orchestrator |
2025-06-01 04:40:30.146327 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-01 04:40:30.147306 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.349) 0:01:04.841 ***********
2025-06-01 04:40:30.254389 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:30.255320 | orchestrator |
2025-06-01 04:40:30.255767 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-01 04:40:30.256623 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.109) 0:01:04.951 ***********
2025-06-01 04:40:30.382799 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:30.383381 | orchestrator |
2025-06-01 04:40:30.383994 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-01 04:40:30.384598 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.128) 0:01:05.080 ***********
2025-06-01 04:40:30.529136 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 04:40:30.529732 | orchestrator |  "vgs_report": {
2025-06-01 04:40:30.530768 | orchestrator |  "vg": []
2025-06-01 04:40:30.531871 | orchestrator |  }
2025-06-01 04:40:30.532908 | orchestrator | }
2025-06-01 04:40:30.534189 | orchestrator |
2025-06-01 04:40:30.534741 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-01 04:40:30.535315 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.145) 0:01:05.226 ***********
2025-06-01 04:40:30.663793 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:30.665110 | orchestrator |
2025-06-01 04:40:30.665752 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-01 04:40:30.666475 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.133) 0:01:05.360 ***********
2025-06-01 04:40:30.804450 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:30.805049 | orchestrator |
2025-06-01 04:40:30.805908 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-01 04:40:30.806751 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.140) 0:01:05.500 ***********
2025-06-01 04:40:30.948129 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:30.948770 | orchestrator |
2025-06-01 04:40:30.949505 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-01 04:40:30.950500 | orchestrator | Sunday 01 June 2025 04:40:30 +0000 (0:00:00.144) 0:01:05.645 ***********
2025-06-01 04:40:31.081405 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:31.082107 | orchestrator |
2025-06-01 04:40:31.083174 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-01 04:40:31.084097 | orchestrator | Sunday 01 June 2025 04:40:31 +0000 (0:00:00.133) 0:01:05.778 ***********
2025-06-01 04:40:31.212809 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:31.213468 | orchestrator |
2025-06-01 04:40:31.214659 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-01 04:40:31.215691 | orchestrator | Sunday 01 June 2025 04:40:31 +0000 (0:00:00.130) 0:01:05.909 ***********
2025-06-01 04:40:31.342470 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:31.342893 | orchestrator |
2025-06-01 04:40:31.344154 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-01 04:40:31.344876 | orchestrator | Sunday 01 June 2025 04:40:31 +0000 (0:00:00.128) 0:01:06.038 ***********
2025-06-01 04:40:31.463853 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:31.464062 | orchestrator |
2025-06-01 04:40:31.465152 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-01 04:40:31.465417 | orchestrator | Sunday 01 June 2025 04:40:31 +0000 (0:00:00.122) 0:01:06.161 ***********
2025-06-01 04:40:31.589242 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:31.589428 | orchestrator |
2025-06-01 04:40:31.590488 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-01 04:40:31.591372 | orchestrator | Sunday 01 June 2025 04:40:31 +0000 (0:00:00.125) 0:01:06.286 ***********
2025-06-01 04:40:31.900716 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:31.901664 | orchestrator |
2025-06-01 04:40:31.902148 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-01 04:40:31.903088 | orchestrator | Sunday 01 June 2025 04:40:31 +0000 (0:00:00.311) 0:01:06.598 ***********
2025-06-01 04:40:32.026221 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.026801 | orchestrator |
2025-06-01 04:40:32.027326 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-01 04:40:32.028187 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.125) 0:01:06.723 ***********
2025-06-01 04:40:32.148063 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.148677 | orchestrator |
2025-06-01 04:40:32.149188 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-01 04:40:32.149955 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.121) 0:01:06.845 ***********
2025-06-01 04:40:32.275784 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.276274 | orchestrator |
2025-06-01 04:40:32.276901 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-01 04:40:32.277746 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.128) 0:01:06.973 ***********
2025-06-01 04:40:32.392878 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.392965 | orchestrator |
2025-06-01 04:40:32.393046 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-01 04:40:32.393341 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.116) 0:01:07.089 ***********
2025-06-01 04:40:32.530247 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.530735 | orchestrator |
2025-06-01 04:40:32.531624 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-01 04:40:32.532291 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.137) 0:01:07.226 ***********
2025-06-01 04:40:32.698483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:32.698589 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:32.698660 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.699504 | orchestrator |
2025-06-01 04:40:32.700015 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-01 04:40:32.700318 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.169) 0:01:07.396 ***********
2025-06-01 04:40:32.842272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:32.842470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:32.843485 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.844275 | orchestrator |
2025-06-01 04:40:32.844819 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-01 04:40:32.846505 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.143) 0:01:07.539 ***********
2025-06-01 04:40:32.988174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:32.989222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:32.989910 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:32.990891 | orchestrator |
2025-06-01 04:40:32.992075 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-01 04:40:32.992422 | orchestrator | Sunday 01 June 2025 04:40:32 +0000 (0:00:00.145) 0:01:07.685 ***********
2025-06-01 04:40:33.129376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:33.129488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:33.131023 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:33.131745 | orchestrator |
2025-06-01 04:40:33.133216 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-01 04:40:33.133301 | orchestrator | Sunday 01 June 2025 04:40:33 +0000 (0:00:00.140) 0:01:07.826 ***********
2025-06-01 04:40:33.279138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:33.280113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:33.280370 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:33.283005 | orchestrator |
2025-06-01 04:40:33.283042 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-01 04:40:33.284060 | orchestrator | Sunday 01 June 2025 04:40:33 +0000 (0:00:00.149) 0:01:07.976 ***********
2025-06-01 04:40:33.427763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:33.427937 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:33.428565 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:33.429691 | orchestrator |
2025-06-01 04:40:33.431070 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-01 04:40:33.431104 | orchestrator | Sunday 01 June 2025 04:40:33 +0000 (0:00:00.148) 0:01:08.124 ***********
2025-06-01 04:40:33.778875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:33.779562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:33.780188 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:33.782559 | orchestrator |
2025-06-01 04:40:33.782584 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-01 04:40:33.782961 | orchestrator | Sunday 01 June 2025 04:40:33 +0000 (0:00:00.351) 0:01:08.475 ***********
2025-06-01 04:40:33.926419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:33.926872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:33.927780 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:33.928973 | orchestrator |
2025-06-01 04:40:33.930698 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-01 04:40:33.932327 | orchestrator | Sunday 01 June 2025 04:40:33 +0000 (0:00:00.147) 0:01:08.623 ***********
2025-06-01 04:40:34.462654 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:34.463796 | orchestrator |
2025-06-01 04:40:34.464157 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-01 04:40:34.465910 | orchestrator | Sunday 01 June 2025 04:40:34 +0000 (0:00:00.535) 0:01:09.158 ***********
2025-06-01 04:40:34.966462 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:34.966725 | orchestrator |
2025-06-01 04:40:34.967671 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-01 04:40:34.968326 | orchestrator | Sunday 01 June 2025 04:40:34 +0000 (0:00:00.504) 0:01:09.663 ***********
2025-06-01 04:40:35.126435 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:35.126595 | orchestrator |
2025-06-01 04:40:35.126680 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-01 04:40:35.127012 | orchestrator | Sunday 01 June 2025 04:40:35 +0000 (0:00:00.160) 0:01:09.823 ***********
2025-06-01 04:40:35.302765 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'vg_name': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:35.303324 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'vg_name': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:35.303890 | orchestrator |
2025-06-01 04:40:35.304806 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-01 04:40:35.306491 | orchestrator | Sunday 01 June 2025 04:40:35 +0000 (0:00:00.176) 0:01:09.999 ***********
2025-06-01 04:40:35.457688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:35.457828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:35.459232 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:35.459958 | orchestrator |
2025-06-01 04:40:35.460307 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-01 04:40:35.462066 | orchestrator | Sunday 01 June 2025 04:40:35 +0000 (0:00:00.154) 0:01:10.154 ***********
2025-06-01 04:40:35.626073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:35.630060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:35.632460 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:35.634948 | orchestrator |
2025-06-01 04:40:35.634979 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-01 04:40:35.635052 | orchestrator | Sunday 01 June 2025 04:40:35 +0000 (0:00:00.169) 0:01:10.323 ***********
2025-06-01 04:40:35.784590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'})
2025-06-01 04:40:35.785823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'})
2025-06-01 04:40:35.786889 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:35.789327 | orchestrator |
2025-06-01 04:40:35.789958 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-01 04:40:35.790150 | orchestrator | Sunday 01 June 2025 04:40:35 +0000 (0:00:00.158) 0:01:10.481 ***********
2025-06-01 04:40:35.942468 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 04:40:35.942630 | orchestrator |  "lvm_report": {
2025-06-01 04:40:35.942736 | orchestrator |  "lv": [
2025-06-01 04:40:35.943335 | orchestrator |  {
2025-06-01 04:40:35.943759 | orchestrator |  "lv_name": "osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9",
2025-06-01 04:40:35.944393 | orchestrator |  "vg_name": "ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9"
2025-06-01 04:40:35.944704 | orchestrator |  },
2025-06-01 04:40:35.945751 | orchestrator |  {
2025-06-01 04:40:35.945965 | orchestrator |  "lv_name": "osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f",
2025-06-01 04:40:35.946284 | orchestrator |  "vg_name": "ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f"
2025-06-01 04:40:35.946812 | orchestrator |  }
2025-06-01 04:40:35.948002 | orchestrator |  ],
2025-06-01 04:40:35.948087 | orchestrator |  "pv": [
2025-06-01 04:40:35.948503 | orchestrator |  {
2025-06-01 04:40:35.948639 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-01 04:40:35.949070 | orchestrator |  "vg_name": "ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f"
2025-06-01 04:40:35.950005 | orchestrator |  },
2025-06-01 04:40:35.950797 | orchestrator |  {
2025-06-01 04:40:35.950988 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-01 04:40:35.951108 | orchestrator |  "vg_name": "ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9"
2025-06-01 04:40:35.951392 | orchestrator |  }
2025-06-01 04:40:35.952171 | orchestrator |  ]
2025-06-01 04:40:35.952551 | orchestrator |  }
2025-06-01 04:40:35.952974 | orchestrator | }
2025-06-01 04:40:35.953295 | orchestrator |
2025-06-01 04:40:35.953826 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:40:35.954124 | orchestrator | 2025-06-01 04:40:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 04:40:35.954408 | orchestrator | 2025-06-01 04:40:35 | INFO  | Please wait and do not abort execution.
2025-06-01 04:40:35.954871 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 04:40:35.955247 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 04:40:35.955749 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 04:40:35.956936 | orchestrator |
2025-06-01 04:40:35.957127 | orchestrator |
2025-06-01 04:40:35.957678 | orchestrator |
2025-06-01 04:40:35.958135 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:40:35.958247 | orchestrator | Sunday 01 June 2025 04:40:35 +0000 (0:00:00.157) 0:01:10.639 ***********
2025-06-01 04:40:35.958635 | orchestrator | ===============================================================================
2025-06-01 04:40:35.959112 | orchestrator | Create block VGs -------------------------------------------------------- 6.44s
2025-06-01 04:40:35.959461 | orchestrator | Create block LVs -------------------------------------------------------- 3.98s
2025-06-01 04:40:35.959802 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.89s
2025-06-01 04:40:35.960157 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s
2025-06-01 04:40:35.960561 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s
2025-06-01 04:40:35.960893 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.52s
2025-06-01 04:40:35.961177 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s
2025-06-01 04:40:35.961461 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s
2025-06-01 04:40:35.961740 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s
2025-06-01 04:40:35.962100 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-06-01 04:40:35.962373 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s
2025-06-01 04:40:35.962645 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2025-06-01 04:40:35.962890 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-06-01 04:40:35.963164 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2025-06-01 04:40:35.963589 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.68s
2025-06-01 04:40:35.964507 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.67s
2025-06-01 04:40:35.964832 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.65s
2025-06-01 04:40:35.965123 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-06-01 04:40:35.965421 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.64s
2025-06-01 04:40:35.965948 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.64s
2025-06-01 04:40:38.171920 | orchestrator | Registering Redlock._acquired_script
2025-06-01 04:40:38.172023 | orchestrator | Registering Redlock._extend_script
2025-06-01 04:40:38.172038 | orchestrator | Registering Redlock._release_script
2025-06-01 04:40:38.228233 | orchestrator | 2025-06-01 04:40:38 | INFO  | Task 00a10dda-4084-4569-bdeb-ce5bb49ae01e (facts) was prepared for execution.
2025-06-01 04:40:38.228305 | orchestrator | 2025-06-01 04:40:38 | INFO  | It takes a moment until task 00a10dda-4084-4569-bdeb-ce5bb49ae01e (facts) has been started and output is visible here.
2025-06-01 04:40:42.269238 | orchestrator |
2025-06-01 04:40:42.273708 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-01 04:40:42.273769 | orchestrator |
2025-06-01 04:40:42.273783 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-01 04:40:42.273795 | orchestrator | Sunday 01 June 2025 04:40:42 +0000 (0:00:00.257) 0:00:00.257 ***********
2025-06-01 04:40:43.732138 | orchestrator | ok: [testbed-manager]
2025-06-01 04:40:43.735223 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:40:43.735261 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:40:43.735275 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:40:43.736408 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:40:43.737147 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:40:43.737893 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:43.738859 | orchestrator |
2025-06-01 04:40:43.739298 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-01 04:40:43.740233 | orchestrator | Sunday 01 June 2025 04:40:43 +0000 (0:00:01.460) 0:00:01.718 ***********
2025-06-01 04:40:43.889460 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:40:43.967247 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:40:44.045833 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:40:44.124677 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:40:44.200330 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:40:44.891727 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:40:44.895499 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:44.896613 | orchestrator |
2025-06-01 04:40:44.897856 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 04:40:44.898852 | orchestrator |
2025-06-01 04:40:44.899784 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 04:40:44.900363 | orchestrator | Sunday 01 June 2025 04:40:44 +0000 (0:00:01.162) 0:00:02.880 ***********
2025-06-01 04:40:49.677036 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:40:49.677999 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:40:49.683207 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:40:49.683238 | orchestrator | ok: [testbed-manager]
2025-06-01 04:40:49.683251 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:40:49.683926 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:40:49.684819 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:40:49.685402 | orchestrator |
2025-06-01 04:40:49.686281 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-01 04:40:49.686789 | orchestrator |
2025-06-01 04:40:49.688415 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-01 04:40:49.688727 | orchestrator | Sunday 01 June 2025 04:40:49 +0000 (0:00:04.788) 0:00:07.668 ***********
2025-06-01 04:40:49.828888 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:40:49.918894 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:40:49.999909 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:40:50.076769 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:40:50.155399 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:40:50.187329 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:40:50.188793 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:40:50.189685 | orchestrator |
2025-06-01 04:40:50.190718 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:40:50.191246 | orchestrator | 2025-06-01 04:40:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 04:40:50.191772 | orchestrator | 2025-06-01 04:40:50 | INFO  | Please wait and do not abort execution.
2025-06-01 04:40:50.193248 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.193804 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.194583 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.195309 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.196082 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.196888 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.197780 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:40:50.198610 | orchestrator |
2025-06-01 04:40:50.199075 | orchestrator |
2025-06-01 04:40:50.199641 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:40:50.200145 | orchestrator | Sunday 01 June 2025 04:40:50 +0000 (0:00:00.510) 0:00:08.179 ***********
2025-06-01 04:40:50.200671 | orchestrator | ===============================================================================
2025-06-01 04:40:50.201244 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s
2025-06-01 04:40:50.201834 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.46s
2025-06-01 04:40:50.202291 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s
2025-06-01 04:40:50.202801 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-06-01 04:40:50.896152 | orchestrator |
2025-06-01 04:40:50.898206 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 1 04:40:50 UTC 2025
2025-06-01 04:40:50.898271 | orchestrator |
2025-06-01 04:40:52.589712 | orchestrator | 2025-06-01 04:40:52 | INFO  | Collection nutshell is prepared for execution
2025-06-01 04:40:52.590641 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [0] - dotfiles
2025-06-01 04:40:52.595074 | orchestrator | Registering Redlock._acquired_script
2025-06-01 04:40:52.595115 | orchestrator | Registering Redlock._extend_script
2025-06-01 04:40:52.595128 | orchestrator | Registering Redlock._release_script
2025-06-01 04:40:52.600815 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [0] - homer
2025-06-01 04:40:52.600843 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [0] - netdata
2025-06-01 04:40:52.600855 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [0] - openstackclient
2025-06-01 04:40:52.600867 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [0] - phpmyadmin
2025-06-01 04:40:52.600878 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [0] - common
2025-06-01 04:40:52.602484 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [1] -- loadbalancer
2025-06-01 04:40:52.602635 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [2] --- opensearch
2025-06-01 04:40:52.602649 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [2] --- mariadb-ng
2025-06-01 04:40:52.602871 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [3] ---- horizon
2025-06-01 04:40:52.602899 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [3] ---- keystone
2025-06-01 04:40:52.603072 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [4] ----- neutron
2025-06-01 04:40:52.603363 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [5] ------ wait-for-nova
2025-06-01 04:40:52.603389 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [5] ------ octavia
2025-06-01 04:40:52.603992 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [4] ----- barbican
2025-06-01 04:40:52.604149 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [4] ----- designate
2025-06-01 04:40:52.604169 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [4] ----- ironic
2025-06-01 04:40:52.604376 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [4] ----- placement
2025-06-01 04:40:52.604399 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [4] ----- magnum
2025-06-01 04:40:52.604928 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [1] -- openvswitch
2025-06-01 04:40:52.605234 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [2] --- ovn
2025-06-01 04:40:52.605587 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [1] -- memcached
2025-06-01 04:40:52.605635 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [1] -- redis
2025-06-01 04:40:52.605716 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [1] -- rabbitmq-ng
2025-06-01 04:40:52.606061 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [0] - kubernetes
2025-06-01 04:40:52.607586 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [1] -- kubeconfig
2025-06-01 04:40:52.607846 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [1] -- copy-kubeconfig
2025-06-01 04:40:52.607869 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [0] - ceph
2025-06-01 04:40:52.609684 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [1] -- ceph-pools
2025-06-01 04:40:52.609809 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [2] --- copy-ceph-keys
2025-06-01 04:40:52.610013 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [3] ---- cephclient
2025-06-01 04:40:52.610117 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-01 04:40:52.610133 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [4] ----- wait-for-keystone
2025-06-01 04:40:52.610145 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-01 04:40:52.610253 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [5] ------ glance
2025-06-01 04:40:52.610310 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [5] ------ cinder
2025-06-01 04:40:52.610546 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [5] ------ nova
2025-06-01 04:40:52.610579 | orchestrator | 2025-06-01 04:40:52 | INFO  | A [4] ----- prometheus
2025-06-01 04:40:52.610724 | orchestrator | 2025-06-01 04:40:52 | INFO  | D [5] ------ grafana
2025-06-01 04:40:52.807579 | orchestrator | 2025-06-01 04:40:52 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-01 04:40:52.807676 | orchestrator | 2025-06-01 04:40:52 | INFO  | Tasks are running in the background
2025-06-01 04:40:55.545175 | orchestrator | 2025-06-01 04:40:55 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-01 04:40:57.666303 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED
2025-06-01 04:40:57.667247 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED
2025-06-01 04:40:57.667306 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED
2025-06-01 04:40:57.670249 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:40:57.673901 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:40:57.675197 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:40:57.675895 | orchestrator | 2025-06-01 04:40:57 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED
2025-06-01 04:40:57.675924 | orchestrator | 2025-06-01 04:40:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:41:00.727765 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED
2025-06-01 04:41:00.727872 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED
2025-06-01 04:41:00.727888 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED
2025-06-01 04:41:00.727900 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:41:00.727993 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:41:00.728348 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:41:00.728862 | orchestrator | 2025-06-01 04:41:00 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED
2025-06-01 04:41:00.728927 | orchestrator | 2025-06-01 04:41:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:41:03.767333 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED
2025-06-01 04:41:03.767440 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED
2025-06-01 04:41:03.767456 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED
2025-06-01 04:41:03.767743 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:41:03.768208 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:41:03.771644 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:41:03.771915 | orchestrator | 2025-06-01 04:41:03 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED
2025-06-01 04:41:03.771940 | orchestrator | 2025-06-01 04:41:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:41:06.850398 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task
7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED 2025-06-01 04:41:06.850502 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:06.850678 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:06.852191 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:06.853981 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:06.854232 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:06.856069 | orchestrator | 2025-06-01 04:41:06 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:06.856092 | orchestrator | 2025-06-01 04:41:06 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:09.903040 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED 2025-06-01 04:41:09.903297 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:09.903874 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:09.908661 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:09.909195 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:09.910652 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:09.913178 | orchestrator | 2025-06-01 04:41:09 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:09.913208 | orchestrator | 2025-06-01 
04:41:09 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:12.982272 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED 2025-06-01 04:41:12.982798 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:12.984724 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:12.989119 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:12.990164 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:12.997147 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:12.997211 | orchestrator | 2025-06-01 04:41:12 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:12.997224 | orchestrator | 2025-06-01 04:41:12 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:16.066489 | orchestrator | 2025-06-01 04:41:16 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state STARTED 2025-06-01 04:41:16.069872 | orchestrator | 2025-06-01 04:41:16 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:16.069945 | orchestrator | 2025-06-01 04:41:16 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:16.069957 | orchestrator | 2025-06-01 04:41:16 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:16.069967 | orchestrator | 2025-06-01 04:41:16 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:16.072335 | orchestrator | 2025-06-01 04:41:16 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:16.072752 | orchestrator | 2025-06-01 
04:41:16 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:16.072772 | orchestrator | 2025-06-01 04:41:16 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:19.117606 | orchestrator | 2025-06-01 04:41:19.117715 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-01 04:41:19.117732 | orchestrator | 2025-06-01 04:41:19.117744 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-01 04:41:19.117755 | orchestrator | Sunday 01 June 2025 04:41:04 +0000 (0:00:00.478) 0:00:00.478 *********** 2025-06-01 04:41:19.117766 | orchestrator | changed: [testbed-manager] 2025-06-01 04:41:19.117779 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:41:19.117790 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:41:19.117801 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:41:19.117812 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:41:19.117822 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:41:19.117833 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:41:19.117844 | orchestrator | 2025-06-01 04:41:19.117855 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-06-01 04:41:19.117866 | orchestrator | Sunday 01 June 2025 04:41:08 +0000 (0:00:03.996) 0:00:04.474 *********** 2025-06-01 04:41:19.117877 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-01 04:41:19.117888 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-01 04:41:19.117899 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-01 04:41:19.117910 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-01 04:41:19.117921 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-01 04:41:19.117932 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-01 04:41:19.117942 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-01 04:41:19.117953 | orchestrator | 2025-06-01 04:41:19.117964 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-01 04:41:19.117976 | orchestrator | Sunday 01 June 2025 04:41:09 +0000 (0:00:01.879) 0:00:06.353 *********** 2025-06-01 04:41:19.118096 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:08.449755', 'end': '2025-06-01 04:41:08.453302', 'delta': '0:00:00.003547', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118119 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:08.595090', 'end': '2025-06-01 04:41:08.603562', 'delta': '0:00:00.008472', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118157 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:08.865911', 'end': '2025-06-01 04:41:08.874692', 'delta': '0:00:00.008781', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118201 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:09.250244', 'end': '2025-06-01 04:41:09.259067', 'delta': '0:00:00.008823', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118215 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:09.441103', 'end': '2025-06-01 04:41:09.449978', 'delta': '0:00:00.008875', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118234 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:09.604265', 'end': '2025-06-01 04:41:09.612633', 'delta': '0:00:00.008368', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118247 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 04:41:09.698137', 'end': '2025-06-01 04:41:09.707887', 'delta': '0:00:00.009750', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 04:41:19.118269 | orchestrator | 2025-06-01 04:41:19.118281 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-06-01 04:41:19.118294 | orchestrator | Sunday 01 June 2025 04:41:11 +0000 (0:00:02.001) 0:00:08.354 *********** 2025-06-01 04:41:19.118366 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-01 04:41:19.118380 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-01 04:41:19.118392 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-01 04:41:19.118403 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-01 04:41:19.118413 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-01 04:41:19.118424 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-01 04:41:19.118435 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-01 04:41:19.118445 | orchestrator | 2025-06-01 04:41:19.118456 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-01 04:41:19.118467 | orchestrator | Sunday 01 June 2025 04:41:14 +0000 (0:00:02.464) 0:00:10.819 *********** 2025-06-01 04:41:19.118478 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-01 04:41:19.118489 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-01 04:41:19.118500 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-01 04:41:19.118511 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-01 04:41:19.118551 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-01 04:41:19.118563 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-01 04:41:19.118574 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-01 04:41:19.118584 | orchestrator | 2025-06-01 04:41:19.118595 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:41:19.118616 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118630 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118642 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118653 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118664 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118675 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118685 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:41:19.118696 | orchestrator | 2025-06-01 04:41:19.118707 | orchestrator | 2025-06-01 04:41:19.118718 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:41:19.118729 | orchestrator | Sunday 01 June 2025 04:41:17 +0000 (0:00:03.395) 0:00:14.214 *********** 2025-06-01 04:41:19.118739 | orchestrator | =============================================================================== 2025-06-01 04:41:19.118770 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.00s 2025-06-01 04:41:19.118781 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.40s 2025-06-01 04:41:19.118792 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.46s 2025-06-01 04:41:19.118803 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.00s 2025-06-01 04:41:19.118813 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.88s 2025-06-01 04:41:19.118824 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 7a4b78bd-85e4-443c-ab20-761df1ad41de is in state SUCCESS 2025-06-01 04:41:19.118835 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:19.118934 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:19.118949 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:19.119557 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:19.124810 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:19.124841 | orchestrator | 2025-06-01 04:41:19 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:19.124853 | orchestrator | 2025-06-01 04:41:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:22.191175 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:22.191286 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:22.192964 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:22.202255 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:22.202622 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:22.206878 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:22.211407 | orchestrator | 2025-06-01 04:41:22 | INFO  | Task 
04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:22.211460 | orchestrator | 2025-06-01 04:41:22 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:25.255878 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:25.258650 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:25.258973 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:25.262754 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:25.264099 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:25.266479 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:25.269545 | orchestrator | 2025-06-01 04:41:25 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:25.269586 | orchestrator | 2025-06-01 04:41:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:28.309771 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:28.309921 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:28.309939 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:28.312959 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:28.313003 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:28.313017 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 
0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:28.313853 | orchestrator | 2025-06-01 04:41:28 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:28.313878 | orchestrator | 2025-06-01 04:41:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:31.346003 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:31.346634 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:31.347745 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:31.349836 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:31.352290 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:31.353554 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:31.355017 | orchestrator | 2025-06-01 04:41:31 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:31.355629 | orchestrator | 2025-06-01 04:41:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:34.399295 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:34.401481 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:34.407301 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:34.413867 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:34.413930 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 
4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:34.413943 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:34.413954 | orchestrator | 2025-06-01 04:41:34 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:34.413966 | orchestrator | 2025-06-01 04:41:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:37.455365 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:37.455513 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:37.455660 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state STARTED 2025-06-01 04:41:37.456580 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:41:37.459571 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:41:37.461177 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:41:37.462091 | orchestrator | 2025-06-01 04:41:37 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED 2025-06-01 04:41:37.462479 | orchestrator | 2025-06-01 04:41:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:41:40.510996 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:41:40.513902 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED 2025-06-01 04:41:40.513961 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 584ac817-339d-4e51-82c6-ecbd8626c45f is in state SUCCESS 2025-06-01 04:41:40.514100 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 
53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:41:40.515321 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:41:40.516601 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:41:40.518111 | orchestrator | 2025-06-01 04:41:40 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state STARTED
2025-06-01 04:41:40.518187 | orchestrator | 2025-06-01 04:41:40 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:41:52.730234 | orchestrator | 2025-06-01 04:41:52 | INFO  | Task 04b37d65-521e-4205-98e9-8ad6bd7d6c14 is in state SUCCESS
2025-06-01 04:41:52.730330 | orchestrator | 2025-06-01 04:41:52 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:42:01.869807 | orchestrator | 2025-06-01 04:42:01 | INFO  | Task 
68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED
2025-06-01 04:42:01.870941 | orchestrator | 2025-06-01 04:42:01 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state STARTED
2025-06-01 04:42:01.872431 | orchestrator | 2025-06-01 04:42:01 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:42:01.872970 | orchestrator | 2025-06-01 04:42:01 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:42:01.874514 | orchestrator | 2025-06-01 04:42:01 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:42:01.874616 | orchestrator | 2025-06-01 04:42:01 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:42:04.912393 | orchestrator | 2025-06-01 04:42:04 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED
2025-06-01 04:42:04.913221 | orchestrator | 2025-06-01 04:42:04 | INFO  | Task 66a5c9bc-deb5-4b76-87f4-3cc8ed3d7f71 is in state SUCCESS
2025-06-01 04:42:04.915475 | orchestrator |
2025-06-01 04:42:04.915550 | orchestrator |
2025-06-01 04:42:04.915558 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-06-01 04:42:04.915563 | orchestrator |
2025-06-01 04:42:04.915568 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-01 04:42:04.915573 | orchestrator | Sunday 01 June 2025 04:41:05 +0000 (0:00:01.134) 0:00:01.134 ***********
2025-06-01 04:42:04.915578 | orchestrator | ok: [testbed-manager] => {
2025-06-01 04:42:04.915584 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-01 04:42:04.915591 | orchestrator | }
2025-06-01 04:42:04.915595 | orchestrator |
2025-06-01 04:42:04.915600 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-01 04:42:04.915604 | orchestrator | Sunday 01 June 2025 04:41:05 +0000 (0:00:00.506) 0:00:01.640 ***********
2025-06-01 04:42:04.915608 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.915614 | orchestrator |
2025-06-01 04:42:04.915618 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-01 04:42:04.915622 | orchestrator | Sunday 01 June 2025 04:41:07 +0000 (0:00:01.412) 0:00:03.053 ***********
2025-06-01 04:42:04.915626 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-01 04:42:04.915631 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-01 04:42:04.915635 | orchestrator |
2025-06-01 04:42:04.915639 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-01 04:42:04.915643 | orchestrator | Sunday 01 June 2025 04:41:08 +0000 (0:00:01.132) 0:00:04.186 ***********
2025-06-01 04:42:04.915647 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915651 | orchestrator |
2025-06-01 04:42:04.915656 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-01 04:42:04.915660 | orchestrator | Sunday 01 June 2025 04:41:10 +0000 (0:00:02.193) 0:00:06.380 ***********
2025-06-01 04:42:04.915664 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915668 | orchestrator |
2025-06-01 04:42:04.915672 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-01 04:42:04.915676 | orchestrator | Sunday 01 June 2025 04:41:12 +0000 (0:00:01.828) 0:00:08.209 ***********
2025-06-01 04:42:04.915680 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
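The `FAILED - RETRYING: … (10 retries left).` line above is what Ansible prints for a task declared with an `until`/`retries`/`delay` loop, and the manager's `Wait 1 second(s) until the next check` lines are the same poll-with-retries pattern applied to task state. A minimal sketch of that pattern (function and variable names are illustrative, not taken from osism or Ansible):

```python
import time


def wait_for_success(check, retries=10, delay=1.0):
    """Poll check() until it reports SUCCESS, as the log above does.

    Re-checks the state, sleeps a fixed delay between attempts, and
    gives up after `retries` attempts.
    """
    for attempt in range(retries):
        if check() == "SUCCESS":
            return attempt  # number of failed checks before success
        time.sleep(delay)
    raise TimeoutError(f"still not SUCCESS after {retries} checks")


# Example: a fake task that reports STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
attempts = wait_for_success(lambda: next(states), delay=0)
```

Here `attempts` ends up as 2: the first two checks see `STARTED` and only the third sees `SUCCESS`.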
2025-06-01 04:42:04.915684 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.915688 | orchestrator |
2025-06-01 04:42:04.915693 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-01 04:42:04.915697 | orchestrator | Sunday 01 June 2025 04:41:37 +0000 (0:00:24.841) 0:00:33.050 ***********
2025-06-01 04:42:04.915701 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915705 | orchestrator |
2025-06-01 04:42:04.915709 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:42:04.915713 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.915720 | orchestrator |
2025-06-01 04:42:04.915727 | orchestrator |
2025-06-01 04:42:04.915734 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:42:04.915741 | orchestrator | Sunday 01 June 2025 04:41:38 +0000 (0:00:01.492) 0:00:34.542 ***********
2025-06-01 04:42:04.915748 | orchestrator | ===============================================================================
2025-06-01 04:42:04.915759 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.84s
2025-06-01 04:42:04.915782 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.19s
2025-06-01 04:42:04.915790 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.83s
2025-06-01 04:42:04.915795 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.49s
2025-06-01 04:42:04.915799 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.41s
2025-06-01 04:42:04.915803 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.13s
2025-06-01 04:42:04.915807 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.51s
2025-06-01 04:42:04.915811 | orchestrator |
2025-06-01 04:42:04.915815 | orchestrator |
2025-06-01 04:42:04.915820 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-01 04:42:04.915824 | orchestrator |
2025-06-01 04:42:04.915828 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-01 04:42:04.915832 | orchestrator | Sunday 01 June 2025 04:41:04 +0000 (0:00:00.642) 0:00:00.642 ***********
2025-06-01 04:42:04.915837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-01 04:42:04.915842 | orchestrator |
2025-06-01 04:42:04.915846 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-01 04:42:04.915851 | orchestrator | Sunday 01 June 2025 04:41:04 +0000 (0:00:00.276) 0:00:00.919 ***********
2025-06-01 04:42:04.915855 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-01 04:42:04.915859 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-01 04:42:04.915863 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-01 04:42:04.915868 | orchestrator |
2025-06-01 04:42:04.915872 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-01 04:42:04.915876 | orchestrator | Sunday 01 June 2025 04:41:06 +0000 (0:00:01.416) 0:00:02.335 ***********
2025-06-01 04:42:04.915880 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915885 | orchestrator |
2025-06-01 04:42:04.915889 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-01 04:42:04.915893 | orchestrator | Sunday 01 June 2025 04:41:07 +0000 (0:00:01.610) 0:00:03.945 ***********
2025-06-01 04:42:04.915906 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-01 04:42:04.915910 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.915914 | orchestrator |
2025-06-01 04:42:04.915919 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-01 04:42:04.915923 | orchestrator | Sunday 01 June 2025 04:41:42 +0000 (0:00:34.636) 0:00:38.582 ***********
2025-06-01 04:42:04.915927 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915931 | orchestrator |
2025-06-01 04:42:04.915935 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-01 04:42:04.915939 | orchestrator | Sunday 01 June 2025 04:41:43 +0000 (0:00:00.833) 0:00:39.415 ***********
2025-06-01 04:42:04.915943 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.915947 | orchestrator |
2025-06-01 04:42:04.915951 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-01 04:42:04.915956 | orchestrator | Sunday 01 June 2025 04:41:43 +0000 (0:00:00.520) 0:00:39.936 ***********
2025-06-01 04:42:04.915960 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915964 | orchestrator |
2025-06-01 04:42:04.915968 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-01 04:42:04.915972 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:01.901) 0:00:41.838 ***********
2025-06-01 04:42:04.915976 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915980 | orchestrator |
2025-06-01 04:42:04.915984 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-01 04:42:04.915991 | orchestrator | Sunday 01 June 2025 04:41:47 +0000 (0:00:01.772) 0:00:43.611 ***********
2025-06-01 04:42:04.915995 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.915999 | orchestrator |
2025-06-01 04:42:04.916003 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-01 04:42:04.916008 | orchestrator | Sunday 01 June 2025 04:41:48 +0000 (0:00:00.925) 0:00:44.536 ***********
2025-06-01 04:42:04.916012 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.916016 | orchestrator |
2025-06-01 04:42:04.916020 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:42:04.916024 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916028 | orchestrator |
2025-06-01 04:42:04.916032 | orchestrator |
2025-06-01 04:42:04.916037 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:42:04.916041 | orchestrator | Sunday 01 June 2025 04:41:48 +0000 (0:00:00.417) 0:00:44.954 ***********
2025-06-01 04:42:04.916045 | orchestrator | ===============================================================================
2025-06-01 04:42:04.916052 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.64s
2025-06-01 04:42:04.916060 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.90s
2025-06-01 04:42:04.916064 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.77s
2025-06-01 04:42:04.916068 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.61s
2025-06-01 04:42:04.916072 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.42s
2025-06-01 04:42:04.916076 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.93s
2025-06-01 04:42:04.916082 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.83s
2025-06-01 04:42:04.916087 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.52s
2025-06-01 04:42:04.916091 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s
2025-06-01 04:42:04.916095 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.28s
2025-06-01 04:42:04.916099 | orchestrator |
2025-06-01 04:42:04.916104 | orchestrator |
2025-06-01 04:42:04.916108 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:42:04.916112 | orchestrator |
2025-06-01 04:42:04.916117 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 04:42:04.916121 | orchestrator | Sunday 01 June 2025 04:41:04 +0000 (0:00:00.678) 0:00:00.678 ***********
2025-06-01 04:42:04.916125 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-01 04:42:04.916129 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-01 04:42:04.916134 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-01 04:42:04.916138 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-01 04:42:04.916142 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-01 04:42:04.916146 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-01 04:42:04.916151 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-06-01 04:42:04.916155 | orchestrator |
2025-06-01 04:42:04.916159 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-01 04:42:04.916164 | orchestrator |
2025-06-01 04:42:04.916168 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-01 04:42:04.916172 | orchestrator | Sunday 01 June 2025 04:41:06 +0000 (0:00:02.496) 0:00:03.175 ***********
2025-06-01 04:42:04.916185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:42:04.916191 | orchestrator |
2025-06-01 04:42:04.916198 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-01 04:42:04.916203 | orchestrator | Sunday 01 June 2025 04:41:09 +0000 (0:00:03.063) 0:00:06.239 ***********
2025-06-01 04:42:04.916207 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.916212 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:42:04.916216 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:42:04.916220 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:42:04.916225 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:42:04.916232 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:42:04.916236 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:42:04.916241 | orchestrator |
2025-06-01 04:42:04.916245 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-01 04:42:04.916249 | orchestrator | Sunday 01 June 2025 04:41:11 +0000 (0:00:01.874) 0:00:08.114 ***********
2025-06-01 04:42:04.916254 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.916258 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:42:04.916262 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:42:04.916267 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:42:04.916271 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:42:04.916275 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:42:04.916279 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:42:04.916284 | orchestrator |
2025-06-01 04:42:04.916288 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-01 04:42:04.916292 | orchestrator | Sunday 01 June 2025 04:41:15 +0000 (0:00:04.121) 0:00:12.235 ***********
2025-06-01 04:42:04.916297 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.916301 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:42:04.916306 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:42:04.916310 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:42:04.916314 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:42:04.916319 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:42:04.916323 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:42:04.916328 | orchestrator |
2025-06-01 04:42:04.916332 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-01 04:42:04.916337 | orchestrator | Sunday 01 June 2025 04:41:18 +0000 (0:00:02.648) 0:00:14.885 ***********
2025-06-01 04:42:04.916341 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.916345 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:42:04.916350 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:42:04.916354 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:42:04.916358 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:42:04.916363 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:42:04.916367 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:42:04.916371 | orchestrator |
2025-06-01 04:42:04.916376 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-01 04:42:04.916380 | orchestrator | Sunday 01 June 2025 04:41:28 +0000 (0:00:09.654) 0:00:24.540 ***********
2025-06-01 04:42:04.916385 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.916390 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:42:04.916394 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:42:04.916398 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:42:04.916403 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:42:04.916407 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:42:04.916411 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:42:04.916416 | orchestrator |
2025-06-01 04:42:04.916420 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-01 04:42:04.916425 | orchestrator | Sunday 01 June 2025 04:41:43 +0000 (0:00:15.509) 0:00:40.050 ***********
2025-06-01 04:42:04.916430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:42:04.916436 | orchestrator |
2025-06-01 04:42:04.916440 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-01 04:42:04.916450 | orchestrator | Sunday 01 June 2025 04:41:44 +0000 (0:00:01.316) 0:00:41.367 ***********
2025-06-01 04:42:04.916457 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-01 04:42:04.916462 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-01 04:42:04.916466 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-01 04:42:04.916471 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-01 04:42:04.916475 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-01 04:42:04.916479 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-01 04:42:04.916484 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-01 04:42:04.916488 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-01 04:42:04.916492 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-01 04:42:04.916497 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-01 04:42:04.916501 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-01 04:42:04.916505 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-01 04:42:04.916510 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-01 04:42:04.916514 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-01 04:42:04.916537 | orchestrator |
2025-06-01 04:42:04.916542 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-01 04:42:04.916547 | orchestrator | Sunday 01 June 2025 04:41:50 +0000 (0:00:05.637) 0:00:47.005 ***********
2025-06-01 04:42:04.916551 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.916555 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:42:04.916560 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:42:04.916564 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:42:04.916569 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:42:04.916573 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:42:04.916577 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:42:04.916581 | orchestrator |
2025-06-01 04:42:04.916586 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-01 04:42:04.916590 | orchestrator | Sunday 01 June 2025 04:41:51 +0000 (0:00:01.058) 0:00:48.063 ***********
2025-06-01 04:42:04.916595 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.916599 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:42:04.916604 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:42:04.916609 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:42:04.916613 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:42:04.916617 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:42:04.916621 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:42:04.916626 | orchestrator |
2025-06-01 04:42:04.916630 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-01 04:42:04.916638 | orchestrator | Sunday 01 June 2025 04:41:53 +0000 (0:00:01.581) 0:00:49.644 ***********
2025-06-01 04:42:04.916643 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.916648 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:42:04.916652 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:42:04.916657 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:42:04.916661 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:42:04.916665 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:42:04.916670 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:42:04.916674 | orchestrator |
2025-06-01 04:42:04.916679 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-01 04:42:04.916683 | orchestrator | Sunday 01 June 2025 04:41:54 +0000 (0:00:01.401) 0:00:51.046 ***********
2025-06-01 04:42:04.916688 | orchestrator | ok: [testbed-manager]
2025-06-01 04:42:04.916692 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:42:04.916696 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:42:04.916700 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:42:04.916704 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:42:04.916709 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:42:04.916746 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:42:04.916757 | orchestrator |
2025-06-01 04:42:04.916761 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-01 04:42:04.916766 | orchestrator | Sunday 01 June 2025 04:41:56 +0000 (0:00:02.015) 0:00:53.062 ***********
2025-06-01 04:42:04.916770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-01 04:42:04.916776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:42:04.916780 | orchestrator |
2025-06-01 04:42:04.916785 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-01 04:42:04.916789 | orchestrator | Sunday 01 June 2025 04:41:57 +0000 (0:00:00.967) 0:00:54.029 ***********
2025-06-01 04:42:04.916794 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.916799 | orchestrator |
2025-06-01 04:42:04.916803 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-01 04:42:04.916807 | orchestrator | Sunday 01 June 2025 04:41:59 +0000 (0:00:01.565) 0:00:55.595 ***********
2025-06-01 04:42:04.916812 | orchestrator | changed: [testbed-manager]
2025-06-01 04:42:04.916816 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:42:04.916821 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:42:04.916825 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:42:04.916829 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:42:04.916834 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:42:04.916838 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:42:04.916843 | orchestrator |
2025-06-01 04:42:04.916847 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:42:04.916851 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916856 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916861 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916865 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916870 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916894 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916900 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:42:04.916904 | orchestrator |
2025-06-01 04:42:04.916908 | orchestrator |
2025-06-01 04:42:04.916913 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:42:04.916917 | orchestrator | Sunday 01 June 2025 04:42:01 +0000 (0:00:02.824) 0:00:58.420 ***********
2025-06-01 04:42:04.916922 | orchestrator | ===============================================================================
2025-06-01 04:42:04.916926 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.51s
2025-06-01 04:42:04.916930 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.65s
2025-06-01 04:42:04.916935 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.64s
2025-06-01 04:42:04.916939 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.12s
2025-06-01 04:42:04.916944 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.06s
2025-06-01 04:42:04.916948 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.82s
2025-06-01 04:42:04.916955 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.65s
2025-06-01 04:42:04.916959 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.50s
2025-06-01 04:42:04.916964 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.02s
2025-06-01 04:42:04.916968 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.87s
2025-06-01 04:42:04.916972 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.58s
2025-06-01 04:42:04.916980 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.57s
2025-06-01 04:42:04.916985 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.40s
2025-06-01 04:42:04.916989 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.32s
2025-06-01 04:42:04.916993 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.06s
2025-06-01 04:42:04.916998 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 0.97s
2025-06-01 04:42:04.917022 | orchestrator | 2025-06-01 04:42:04 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:42:04.917099 | orchestrator | 2025-06-01 04:42:04 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:42:04.917971 | orchestrator | 2025-06-01 04:42:04 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:42:04.918434 | orchestrator | 2025-06-01 04:42:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:42:07.969436 | orchestrator | 2025-06-01 04:42:07 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED
2025-06-01 04:42:07.970711 | orchestrator | 2025-06-01 04:42:07 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:42:07.971753 | orchestrator | 2025-06-01 04:42:07 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:42:07.974897 | orchestrator | 2025-06-01 04:42:07 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:42:07.974983 | orchestrator | 2025-06-01 04:42:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:42:11.012384 | orchestrator | 2025-06-01 04:42:11 | INFO  | Task 
68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED
2025-06-01 04:42:11.013898 | orchestrator | 2025-06-01 04:42:11 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:42:11.015877 | orchestrator | 2025-06-01 04:42:11 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED
2025-06-01 04:42:11.018139 | orchestrator | 2025-06-01 04:42:11 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED
2025-06-01 04:42:11.018172 | orchestrator | 2025-06-01 04:42:11 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:42:38.487436 | orchestrator | 2025-06-01 04:42:38 | INFO  | Task 
68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state STARTED 2025-06-01 04:42:38.489313 | orchestrator | 2025-06-01 04:42:38 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:38.490930 | orchestrator | 2025-06-01 04:42:38 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:38.492198 | orchestrator | 2025-06-01 04:42:38 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:38.492389 | orchestrator | 2025-06-01 04:42:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:41.536259 | orchestrator | 2025-06-01 04:42:41 | INFO  | Task 68e67ba1-b6ee-48d2-af5c-7bca3fef73b4 is in state SUCCESS 2025-06-01 04:42:41.536874 | orchestrator | 2025-06-01 04:42:41 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:41.539407 | orchestrator | 2025-06-01 04:42:41 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:41.541937 | orchestrator | 2025-06-01 04:42:41 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:41.542235 | orchestrator | 2025-06-01 04:42:41 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:44.584740 | orchestrator | 2025-06-01 04:42:44 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:44.587950 | orchestrator | 2025-06-01 04:42:44 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:44.590641 | orchestrator | 2025-06-01 04:42:44 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:44.590706 | orchestrator | 2025-06-01 04:42:44 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:47.635104 | orchestrator | 2025-06-01 04:42:47 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:47.639961 | orchestrator | 2025-06-01 04:42:47 | INFO  | Task 
4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:47.641275 | orchestrator | 2025-06-01 04:42:47 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:47.641357 | orchestrator | 2025-06-01 04:42:47 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:50.675275 | orchestrator | 2025-06-01 04:42:50 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:50.676230 | orchestrator | 2025-06-01 04:42:50 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:50.677724 | orchestrator | 2025-06-01 04:42:50 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:50.677768 | orchestrator | 2025-06-01 04:42:50 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:53.723798 | orchestrator | 2025-06-01 04:42:53 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:53.726832 | orchestrator | 2025-06-01 04:42:53 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:53.728242 | orchestrator | 2025-06-01 04:42:53 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:53.728273 | orchestrator | 2025-06-01 04:42:53 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:56.766908 | orchestrator | 2025-06-01 04:42:56 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:42:56.767250 | orchestrator | 2025-06-01 04:42:56 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:56.769467 | orchestrator | 2025-06-01 04:42:56 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:56.769822 | orchestrator | 2025-06-01 04:42:56 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:42:59.811165 | orchestrator | 2025-06-01 04:42:59 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state 
STARTED 2025-06-01 04:42:59.812722 | orchestrator | 2025-06-01 04:42:59 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:42:59.815249 | orchestrator | 2025-06-01 04:42:59 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:42:59.815387 | orchestrator | 2025-06-01 04:42:59 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:02.873900 | orchestrator | 2025-06-01 04:43:02 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:02.874110 | orchestrator | 2025-06-01 04:43:02 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:43:02.874260 | orchestrator | 2025-06-01 04:43:02 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:02.874285 | orchestrator | 2025-06-01 04:43:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:05.907250 | orchestrator | 2025-06-01 04:43:05 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:05.908919 | orchestrator | 2025-06-01 04:43:05 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:43:05.911648 | orchestrator | 2025-06-01 04:43:05 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:05.911718 | orchestrator | 2025-06-01 04:43:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:08.953123 | orchestrator | 2025-06-01 04:43:08 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:08.954407 | orchestrator | 2025-06-01 04:43:08 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:43:08.956830 | orchestrator | 2025-06-01 04:43:08 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:08.957068 | orchestrator | 2025-06-01 04:43:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:11.995303 | orchestrator | 
2025-06-01 04:43:11 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:11.997288 | orchestrator | 2025-06-01 04:43:11 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state STARTED 2025-06-01 04:43:11.998165 | orchestrator | 2025-06-01 04:43:11 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:11.998315 | orchestrator | 2025-06-01 04:43:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:15.037392 | orchestrator | 2025-06-01 04:43:15 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:15.041197 | orchestrator | 2025-06-01 04:43:15 | INFO  | Task 4a31433b-091d-4fb3-bb57-9a4f51d334d0 is in state SUCCESS 2025-06-01 04:43:15.044398 | orchestrator | 2025-06-01 04:43:15.044449 | orchestrator | 2025-06-01 04:43:15.044463 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-01 04:43:15.044475 | orchestrator | 2025-06-01 04:43:15.044486 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-01 04:43:15.044497 | orchestrator | Sunday 01 June 2025 04:41:24 +0000 (0:00:00.169) 0:00:00.169 *********** 2025-06-01 04:43:15.044509 | orchestrator | ok: [testbed-manager] 2025-06-01 04:43:15.044547 | orchestrator | 2025-06-01 04:43:15.044560 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-01 04:43:15.044571 | orchestrator | Sunday 01 June 2025 04:41:24 +0000 (0:00:00.699) 0:00:00.869 *********** 2025-06-01 04:43:15.044583 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-01 04:43:15.044595 | orchestrator | 2025-06-01 04:43:15.044606 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-01 04:43:15.044617 | orchestrator | Sunday 01 June 2025 04:41:25 +0000 (0:00:00.702) 0:00:01.572 *********** 2025-06-01 
04:43:15.044628 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.044639 | orchestrator | 2025-06-01 04:43:15.044649 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-01 04:43:15.044660 | orchestrator | Sunday 01 June 2025 04:41:26 +0000 (0:00:01.152) 0:00:02.724 *********** 2025-06-01 04:43:15.044671 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-06-01 04:43:15.044682 | orchestrator | ok: [testbed-manager] 2025-06-01 04:43:15.044693 | orchestrator | 2025-06-01 04:43:15.044704 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-06-01 04:43:15.044715 | orchestrator | Sunday 01 June 2025 04:42:26 +0000 (0:00:59.999) 0:01:02.724 *********** 2025-06-01 04:43:15.044725 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.044736 | orchestrator | 2025-06-01 04:43:15.044747 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:43:15.044758 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:15.044771 | orchestrator | 2025-06-01 04:43:15.044782 | orchestrator | 2025-06-01 04:43:15.044793 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:43:15.044805 | orchestrator | Sunday 01 June 2025 04:42:40 +0000 (0:00:14.081) 0:01:16.806 *********** 2025-06-01 04:43:15.044816 | orchestrator | =============================================================================== 2025-06-01 04:43:15.044827 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.00s 2025-06-01 04:43:15.044837 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 14.08s 2025-06-01 04:43:15.044848 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file 
---------------- 1.15s 2025-06-01 04:43:15.044861 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.70s 2025-06-01 04:43:15.044886 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.70s 2025-06-01 04:43:15.044901 | orchestrator | 2025-06-01 04:43:15.044914 | orchestrator | 2025-06-01 04:43:15.044926 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-06-01 04:43:15.044940 | orchestrator | 2025-06-01 04:43:15.044952 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-01 04:43:15.044965 | orchestrator | Sunday 01 June 2025 04:40:57 +0000 (0:00:00.250) 0:00:00.250 *********** 2025-06-01 04:43:15.044978 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:43:15.044992 | orchestrator | 2025-06-01 04:43:15.045005 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-01 04:43:15.045019 | orchestrator | Sunday 01 June 2025 04:40:58 +0000 (0:00:01.057) 0:00:01.307 *********** 2025-06-01 04:43:15.045031 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045059 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045072 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045085 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045097 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045110 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045123 | 
orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045135 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045148 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045161 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 04:43:15.045174 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045187 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045201 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045215 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045226 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 04:43:15.045238 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045262 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045274 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045285 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045297 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045307 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 04:43:15.045318 | orchestrator | 2025-06-01 04:43:15.045329 | orchestrator | TASK [common : include_tasks] 
************************************************** 2025-06-01 04:43:15.045340 | orchestrator | Sunday 01 June 2025 04:41:02 +0000 (0:00:03.858) 0:00:05.165 *********** 2025-06-01 04:43:15.045351 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:43:15.045363 | orchestrator | 2025-06-01 04:43:15.045374 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-01 04:43:15.045385 | orchestrator | Sunday 01 June 2025 04:41:03 +0000 (0:00:01.362) 0:00:06.528 *********** 2025-06-01 04:43:15.045400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.045416 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.045445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.045457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.045469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.045480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 
04:43:15.045500 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.045546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045603 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 
04:43:15.045712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.045762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045774 | orchestrator |
2025-06-01 04:43:15.045785 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-01 04:43:15.045796 | orchestrator | Sunday 01 June 2025 04:41:08 +0000 (0:00:04.742) 0:00:11.270 ***********
2025-06-01 04:43:15.045821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.045834 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045845 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.045879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045891 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:43:15.045902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.045925 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:43:15.045937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.045974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.045991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046097 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:43:15.046121 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:43:15.046148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046204 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:43:15.046222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046296 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:43:15.046312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046372 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:43:15.046390 | orchestrator |
2025-06-01 04:43:15.046407 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-01 04:43:15.046424 | orchestrator | Sunday 01 June 2025 04:41:09 +0000 (0:00:01.436) 0:00:12.706 ***********
2025-06-01 04:43:15.046442 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046461 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046560 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:43:15.046578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046612 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:43:15.046635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046669 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:43:15.046681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd',
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046771 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:43:15.046783 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:43:15.046794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046841 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:43:15.046852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.046864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.046887 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:43:15.046898 | orchestrator |
2025-06-01 04:43:15.046909 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-01 04:43:15.046920 | orchestrator | Sunday 01 June 2025 04:41:11 +0000 (0:00:02.008) 0:00:14.715 ***********
2025-06-01 04:43:15.046931 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:43:15.046942 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:43:15.046953 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:43:15.046964 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:43:15.046975 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:43:15.046985 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:43:15.046996 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:43:15.047007 | orchestrator |
2025-06-01 04:43:15.047018 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-01 04:43:15.047029 | orchestrator | Sunday 01 June 2025 04:41:13 +0000 (0:00:01.145) 0:00:15.861 ***********
2025-06-01 04:43:15.047040 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:43:15.047051 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:43:15.047061 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:43:15.047072 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:43:15.047083 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:43:15.047093 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:43:15.047104 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:43:15.047115 | orchestrator |
2025-06-01 04:43:15.047126 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-01 04:43:15.047137 | orchestrator | Sunday 01 June 2025 04:41:14 +0000 (0:00:01.377) 0:00:17.239 ***********
2025-06-01 04:43:15.047148 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047210 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value':
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047281 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 04:43:15.047329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:43:15.047483 | orchestrator |
2025-06-01 04:43:15.047494 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-01 04:43:15.047505 | orchestrator | Sunday 01 June 2025 04:41:19 +0000 (0:00:05.229) 0:00:22.468 ***********
2025-06-01 04:43:15.047517 | orchestrator | [WARNING]: Skipped
2025-06-01 04:43:15.047549 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-06-01 04:43:15.047561 | orchestrator | to this access issue:
2025-06-01 04:43:15.047572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-06-01 04:43:15.047583 | orchestrator | directory
2025-06-01 04:43:15.047594 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 04:43:15.047605 | orchestrator |
2025-06-01 04:43:15.047616 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-01 04:43:15.047626 | orchestrator | Sunday 01 June 2025 04:41:21 +0000 (0:00:01.540) 0:00:24.008 ***********
2025-06-01 04:43:15.047637 | orchestrator | [WARNING]: Skipped
2025-06-01 04:43:15.047655 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-06-01 04:43:15.047671 | orchestrator | to this access issue:
2025-06-01 04:43:15.047682 | orchestrator
| '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-01 04:43:15.047693 | orchestrator | directory 2025-06-01 04:43:15.047704 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 04:43:15.047715 | orchestrator | 2025-06-01 04:43:15.047726 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-01 04:43:15.047737 | orchestrator | Sunday 01 June 2025 04:41:22 +0000 (0:00:01.354) 0:00:25.362 *********** 2025-06-01 04:43:15.047748 | orchestrator | [WARNING]: Skipped 2025-06-01 04:43:15.047759 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-01 04:43:15.047769 | orchestrator | to this access issue: 2025-06-01 04:43:15.047780 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-01 04:43:15.047791 | orchestrator | directory 2025-06-01 04:43:15.047802 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 04:43:15.047812 | orchestrator | 2025-06-01 04:43:15.047823 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-01 04:43:15.047834 | orchestrator | Sunday 01 June 2025 04:41:23 +0000 (0:00:00.791) 0:00:26.154 *********** 2025-06-01 04:43:15.047845 | orchestrator | [WARNING]: Skipped 2025-06-01 04:43:15.047856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-01 04:43:15.047867 | orchestrator | to this access issue: 2025-06-01 04:43:15.047877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-01 04:43:15.047888 | orchestrator | directory 2025-06-01 04:43:15.047899 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 04:43:15.047910 | orchestrator | 2025-06-01 04:43:15.047920 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-01 04:43:15.047931 | 
orchestrator | Sunday 01 June 2025 04:41:24 +0000 (0:00:00.770) 0:00:26.924 *********** 2025-06-01 04:43:15.047942 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.047953 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.047964 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.047974 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.047985 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.047996 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.048006 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.048017 | orchestrator | 2025-06-01 04:43:15.048028 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-01 04:43:15.048039 | orchestrator | Sunday 01 June 2025 04:41:27 +0000 (0:00:03.315) 0:00:30.240 *********** 2025-06-01 04:43:15.048050 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048061 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048072 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048100 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048111 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048122 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-01 04:43:15.048132 | orchestrator | 2025-06-01 04:43:15.048144 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie 
exists] *************************** 2025-06-01 04:43:15.048154 | orchestrator | Sunday 01 June 2025 04:41:30 +0000 (0:00:02.870) 0:00:33.110 *********** 2025-06-01 04:43:15.048173 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.048184 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.048195 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.048205 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.048216 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.048227 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.048238 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.048248 | orchestrator | 2025-06-01 04:43:15.048259 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-01 04:43:15.048270 | orchestrator | Sunday 01 June 2025 04:41:32 +0000 (0:00:02.196) 0:00:35.306 *********** 2025-06-01 04:43:15.048281 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048333 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048373 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048389 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048420 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048432 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048444 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048456 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048492 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048504 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:43:15.048564 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048576 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048588 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048599 | orchestrator | 2025-06-01 04:43:15.048610 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-01 04:43:15.048622 | orchestrator | Sunday 01 June 2025 04:41:34 +0000 (0:00:02.417) 0:00:37.724 *********** 2025-06-01 04:43:15.048637 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048680 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048691 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048702 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048713 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-01 04:43:15.048724 | orchestrator | 2025-06-01 04:43:15.048735 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-01 04:43:15.048746 | orchestrator | Sunday 01 June 2025 04:41:36 +0000 
(0:00:02.051) 0:00:39.776 *********** 2025-06-01 04:43:15.048757 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048800 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048811 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048822 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-01 04:43:15.048833 | orchestrator | 2025-06-01 04:43:15.048844 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-01 04:43:15.048855 | orchestrator | Sunday 01 June 2025 04:41:39 +0000 (0:00:02.114) 0:00:41.891 *********** 2025-06-01 04:43:15.048866 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.048925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 
04:43:15.048943 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048983 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.048995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.049024 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 04:43:15.049079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049137 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:43:15.049160 | orchestrator | 2025-06-01 04:43:15.049176 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-01 04:43:15.049187 | orchestrator | Sunday 01 June 2025 04:41:42 +0000 (0:00:03.472) 0:00:45.364 *********** 2025-06-01 04:43:15.049198 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.049209 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.049220 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.049231 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.049242 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.049252 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.049263 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.049274 | orchestrator | 2025-06-01 04:43:15.049285 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-01 04:43:15.049296 | orchestrator | Sunday 01 June 2025 04:41:44 +0000 (0:00:01.692) 0:00:47.056 *********** 2025-06-01 04:43:15.049307 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.049318 | 
orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.049328 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.049339 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.049350 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.049361 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.049371 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.049382 | orchestrator | 2025-06-01 04:43:15.049393 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 04:43:15.049404 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:01.452) 0:00:48.509 *********** 2025-06-01 04:43:15.049415 | orchestrator | 2025-06-01 04:43:15.049426 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 04:43:15.049437 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:00.073) 0:00:48.582 *********** 2025-06-01 04:43:15.049447 | orchestrator | 2025-06-01 04:43:15.049458 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 04:43:15.049469 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:00.108) 0:00:48.691 *********** 2025-06-01 04:43:15.049480 | orchestrator | 2025-06-01 04:43:15.049491 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 04:43:15.049502 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:00.066) 0:00:48.757 *********** 2025-06-01 04:43:15.049513 | orchestrator | 2025-06-01 04:43:15.049574 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 04:43:15.049593 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:00.065) 0:00:48.822 *********** 2025-06-01 04:43:15.049604 | orchestrator | 2025-06-01 04:43:15.049615 | orchestrator | TASK [common : Flush handlers] ************************************************* 
2025-06-01 04:43:15.049626 | orchestrator | Sunday 01 June 2025 04:41:46 +0000 (0:00:00.187) 0:00:49.010 *********** 2025-06-01 04:43:15.049637 | orchestrator | 2025-06-01 04:43:15.049648 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 04:43:15.049659 | orchestrator | Sunday 01 June 2025 04:41:46 +0000 (0:00:00.087) 0:00:49.097 *********** 2025-06-01 04:43:15.049670 | orchestrator | 2025-06-01 04:43:15.049680 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-01 04:43:15.049696 | orchestrator | Sunday 01 June 2025 04:41:46 +0000 (0:00:00.085) 0:00:49.183 *********** 2025-06-01 04:43:15.049708 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.049718 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.049729 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.049740 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.049751 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.049761 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.049772 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.049783 | orchestrator | 2025-06-01 04:43:15.049794 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-01 04:43:15.049805 | orchestrator | Sunday 01 June 2025 04:42:27 +0000 (0:00:40.992) 0:01:30.176 *********** 2025-06-01 04:43:15.049816 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.049826 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.049837 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.049848 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.049859 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.049869 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.049880 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.049891 | orchestrator 
| 2025-06-01 04:43:15.049901 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-01 04:43:15.049912 | orchestrator | Sunday 01 June 2025 04:43:02 +0000 (0:00:35.210) 0:02:05.387 *********** 2025-06-01 04:43:15.049923 | orchestrator | ok: [testbed-manager] 2025-06-01 04:43:15.049934 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:43:15.049945 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:43:15.049956 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:43:15.049966 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:43:15.049977 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:43:15.049988 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:43:15.049999 | orchestrator | 2025-06-01 04:43:15.050010 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-01 04:43:15.050058 | orchestrator | Sunday 01 June 2025 04:43:04 +0000 (0:00:02.013) 0:02:07.400 *********** 2025-06-01 04:43:15.050071 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:15.050082 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:15.050092 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:43:15.050103 | orchestrator | changed: [testbed-manager] 2025-06-01 04:43:15.050113 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:15.050124 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:43:15.050134 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:43:15.050143 | orchestrator | 2025-06-01 04:43:15.050153 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:43:15.050163 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050174 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050191 | orchestrator | testbed-node-1 : ok=18  changed=14  
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050208 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050218 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050228 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050237 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 04:43:15.050247 | orchestrator | 2025-06-01 04:43:15.050257 | orchestrator | 2025-06-01 04:43:15.050267 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:43:15.050277 | orchestrator | Sunday 01 June 2025 04:43:14 +0000 (0:00:09.532) 0:02:16.932 *********** 2025-06-01 04:43:15.050287 | orchestrator | =============================================================================== 2025-06-01 04:43:15.050296 | orchestrator | common : Restart fluentd container ------------------------------------- 40.99s 2025-06-01 04:43:15.050306 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.21s 2025-06-01 04:43:15.050315 | orchestrator | common : Restart cron container ----------------------------------------- 9.53s 2025-06-01 04:43:15.050325 | orchestrator | common : Copying over config.json files for services -------------------- 5.23s 2025-06-01 04:43:15.050334 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.74s 2025-06-01 04:43:15.050344 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.86s 2025-06-01 04:43:15.050353 | orchestrator | common : Check common containers ---------------------------------------- 3.47s 2025-06-01 04:43:15.050363 | orchestrator | common : Copying over fluentd.conf 
-------------------------------------- 3.32s 2025-06-01 04:43:15.050373 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.87s 2025-06-01 04:43:15.050382 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.42s 2025-06-01 04:43:15.050392 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.20s 2025-06-01 04:43:15.050401 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.11s 2025-06-01 04:43:15.050411 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.05s 2025-06-01 04:43:15.050425 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.01s 2025-06-01 04:43:15.050435 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.01s 2025-06-01 04:43:15.050444 | orchestrator | common : Creating log volume -------------------------------------------- 1.69s 2025-06-01 04:43:15.050454 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.54s 2025-06-01 04:43:15.050463 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.45s 2025-06-01 04:43:15.050473 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.44s 2025-06-01 04:43:15.050482 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.38s 2025-06-01 04:43:15.050492 | orchestrator | 2025-06-01 04:43:15 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:15.050502 | orchestrator | 2025-06-01 04:43:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:18.111246 | orchestrator | 2025-06-01 04:43:18 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:18.111863 | orchestrator | 2025-06-01 04:43:18 | INFO  | 
Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:18.112692 | orchestrator | 2025-06-01 04:43:18 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:18.114004 | orchestrator | 2025-06-01 04:43:18 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:18.118103 | orchestrator | 2025-06-01 04:43:18 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:18.119047 | orchestrator | 2025-06-01 04:43:18 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state STARTED 2025-06-01 04:43:18.119078 | orchestrator | 2025-06-01 04:43:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:21.168724 | orchestrator | 2025-06-01 04:43:21 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:21.168830 | orchestrator | 2025-06-01 04:43:21 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:21.168846 | orchestrator | 2025-06-01 04:43:21 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:21.168858 | orchestrator | 2025-06-01 04:43:21 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:21.168869 | orchestrator | 2025-06-01 04:43:21 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:21.170906 | orchestrator | 2025-06-01 04:43:21 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state STARTED 2025-06-01 04:43:21.170935 | orchestrator | 2025-06-01 04:43:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:24.202108 | orchestrator | 2025-06-01 04:43:24 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:24.202342 | orchestrator | 2025-06-01 04:43:24 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:24.202910 | orchestrator | 2025-06-01 04:43:24 | INFO  | Task 
53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:24.205021 | orchestrator | 2025-06-01 04:43:24 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:24.205953 | orchestrator | 2025-06-01 04:43:24 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:24.206411 | orchestrator | 2025-06-01 04:43:24 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state STARTED 2025-06-01 04:43:24.207869 | orchestrator | 2025-06-01 04:43:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:27.239408 | orchestrator | 2025-06-01 04:43:27 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:27.239512 | orchestrator | 2025-06-01 04:43:27 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:27.239684 | orchestrator | 2025-06-01 04:43:27 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:27.240018 | orchestrator | 2025-06-01 04:43:27 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:27.240817 | orchestrator | 2025-06-01 04:43:27 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:27.241752 | orchestrator | 2025-06-01 04:43:27 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state STARTED 2025-06-01 04:43:27.244951 | orchestrator | 2025-06-01 04:43:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:30.291652 | orchestrator | 2025-06-01 04:43:30 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:30.295737 | orchestrator | 2025-06-01 04:43:30 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:30.297032 | orchestrator | 2025-06-01 04:43:30 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:30.298153 | orchestrator | 2025-06-01 04:43:30 | INFO  | Task 
5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:30.300090 | orchestrator | 2025-06-01 04:43:30 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:30.302674 | orchestrator | 2025-06-01 04:43:30 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state STARTED 2025-06-01 04:43:30.302720 | orchestrator | 2025-06-01 04:43:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:33.331186 | orchestrator | 2025-06-01 04:43:33 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:33.332678 | orchestrator | 2025-06-01 04:43:33 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:33.333599 | orchestrator | 2025-06-01 04:43:33 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:33.335300 | orchestrator | 2025-06-01 04:43:33 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:33.336321 | orchestrator | 2025-06-01 04:43:33 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:33.338489 | orchestrator | 2025-06-01 04:43:33 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state STARTED 2025-06-01 04:43:33.338581 | orchestrator | 2025-06-01 04:43:33 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:36.371341 | orchestrator | 2025-06-01 04:43:36 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:36.371692 | orchestrator | 2025-06-01 04:43:36 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:36.373825 | orchestrator | 2025-06-01 04:43:36 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:36.374393 | orchestrator | 2025-06-01 04:43:36 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:36.375210 | orchestrator | 2025-06-01 04:43:36 | INFO  | Task 
0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:36.376725 | orchestrator | 2025-06-01 04:43:36 | INFO  | Task 0ae87003-e130-4ce9-a888-449a762ec8e7 is in state SUCCESS 2025-06-01 04:43:36.376751 | orchestrator | 2025-06-01 04:43:36 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:39.408097 | orchestrator | 2025-06-01 04:43:39 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:39.408488 | orchestrator | 2025-06-01 04:43:39 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:39.408917 | orchestrator | 2025-06-01 04:43:39 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:39.409780 | orchestrator | 2025-06-01 04:43:39 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:39.412337 | orchestrator | 2025-06-01 04:43:39 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:39.413225 | orchestrator | 2025-06-01 04:43:39 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:39.413374 | orchestrator | 2025-06-01 04:43:39 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:42.460356 | orchestrator | 2025-06-01 04:43:42 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:42.460593 | orchestrator | 2025-06-01 04:43:42 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:42.461623 | orchestrator | 2025-06-01 04:43:42 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:42.462139 | orchestrator | 2025-06-01 04:43:42 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:42.463743 | orchestrator | 2025-06-01 04:43:42 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:42.464327 | orchestrator | 2025-06-01 04:43:42 | INFO  | Task 
0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:42.464516 | orchestrator | 2025-06-01 04:43:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:45.488233 | orchestrator | 2025-06-01 04:43:45 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:45.488369 | orchestrator | 2025-06-01 04:43:45 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:45.488903 | orchestrator | 2025-06-01 04:43:45 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:45.491349 | orchestrator | 2025-06-01 04:43:45 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:45.491915 | orchestrator | 2025-06-01 04:43:45 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:45.492857 | orchestrator | 2025-06-01 04:43:45 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:45.492877 | orchestrator | 2025-06-01 04:43:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:48.531920 | orchestrator | 2025-06-01 04:43:48 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:48.532366 | orchestrator | 2025-06-01 04:43:48 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:48.533983 | orchestrator | 2025-06-01 04:43:48 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:48.535500 | orchestrator | 2025-06-01 04:43:48 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:48.537244 | orchestrator | 2025-06-01 04:43:48 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:48.538615 | orchestrator | 2025-06-01 04:43:48 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:48.538638 | orchestrator | 2025-06-01 04:43:48 | INFO  | Wait 1 
second(s) until the next check 2025-06-01 04:43:51.576163 | orchestrator | 2025-06-01 04:43:51 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:51.576338 | orchestrator | 2025-06-01 04:43:51 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state STARTED 2025-06-01 04:43:51.576843 | orchestrator | 2025-06-01 04:43:51 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:51.580413 | orchestrator | 2025-06-01 04:43:51 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:51.580877 | orchestrator | 2025-06-01 04:43:51 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:51.581449 | orchestrator | 2025-06-01 04:43:51 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:51.581489 | orchestrator | 2025-06-01 04:43:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:54.608428 | orchestrator | 2025-06-01 04:43:54 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:54.608800 | orchestrator | 2025-06-01 04:43:54 | INFO  | Task 662e1845-e153-4ec6-8242-5187aa65ee74 is in state SUCCESS 2025-06-01 04:43:54.610388 | orchestrator | 2025-06-01 04:43:54.610430 | orchestrator | 2025-06-01 04:43:54.610439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:43:54.610448 | orchestrator | 2025-06-01 04:43:54.610475 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:43:54.610483 | orchestrator | Sunday 01 June 2025 04:43:21 +0000 (0:00:00.558) 0:00:00.558 *********** 2025-06-01 04:43:54.610491 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:43:54.610500 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:43:54.610507 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:43:54.610515 | orchestrator | 2025-06-01 04:43:54.610540 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:43:54.610548 | orchestrator | Sunday 01 June 2025 04:43:21 +0000 (0:00:00.536) 0:00:01.095 *********** 2025-06-01 04:43:54.610558 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-01 04:43:54.610566 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-01 04:43:54.610574 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-01 04:43:54.610582 | orchestrator | 2025-06-01 04:43:54.610590 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-01 04:43:54.610597 | orchestrator | 2025-06-01 04:43:54.610605 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-01 04:43:54.610613 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.684) 0:00:01.779 *********** 2025-06-01 04:43:54.610620 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:43:54.610629 | orchestrator | 2025-06-01 04:43:54.610636 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-01 04:43:54.610643 | orchestrator | Sunday 01 June 2025 04:43:23 +0000 (0:00:01.145) 0:00:02.925 *********** 2025-06-01 04:43:54.610651 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-01 04:43:54.610659 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-01 04:43:54.610667 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-01 04:43:54.610675 | orchestrator | 2025-06-01 04:43:54.610683 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-01 04:43:54.610697 | orchestrator | Sunday 01 June 2025 04:43:24 +0000 (0:00:01.050) 0:00:03.976 *********** 2025-06-01 04:43:54.610704 | orchestrator | 
changed: [testbed-node-1] => (item=memcached) 2025-06-01 04:43:54.610712 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-01 04:43:54.610720 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-01 04:43:54.610728 | orchestrator | 2025-06-01 04:43:54.610736 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-01 04:43:54.610744 | orchestrator | Sunday 01 June 2025 04:43:27 +0000 (0:00:02.344) 0:00:06.320 *********** 2025-06-01 04:43:54.610752 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:54.610760 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:54.610768 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:54.610776 | orchestrator | 2025-06-01 04:43:54.610784 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-01 04:43:54.610792 | orchestrator | Sunday 01 June 2025 04:43:28 +0000 (0:00:01.681) 0:00:08.002 *********** 2025-06-01 04:43:54.610797 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:54.610801 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:54.610806 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:54.610811 | orchestrator | 2025-06-01 04:43:54.610816 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:43:54.610821 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:54.610828 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:54.610832 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:54.610837 | orchestrator | 2025-06-01 04:43:54.610848 | orchestrator | 2025-06-01 04:43:54.610853 | orchestrator | TASKS RECAP ******************************************************************** 
2025-06-01 04:43:54.610858 | orchestrator | Sunday 01 June 2025 04:43:35 +0000 (0:00:06.979) 0:00:14.981 *********** 2025-06-01 04:43:54.610863 | orchestrator | =============================================================================== 2025-06-01 04:43:54.610871 | orchestrator | memcached : Restart memcached container --------------------------------- 6.98s 2025-06-01 04:43:54.610879 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.34s 2025-06-01 04:43:54.610887 | orchestrator | memcached : Check memcached container ----------------------------------- 1.68s 2025-06-01 04:43:54.610895 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.15s 2025-06-01 04:43:54.610902 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.05s 2025-06-01 04:43:54.610911 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-06-01 04:43:54.610918 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2025-06-01 04:43:54.610926 | orchestrator | 2025-06-01 04:43:54.610934 | orchestrator | 2025-06-01 04:43:54.610942 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:43:54.610950 | orchestrator | 2025-06-01 04:43:54.610958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:43:54.610966 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.477) 0:00:00.477 *********** 2025-06-01 04:43:54.610974 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:43:54.610982 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:43:54.610990 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:43:54.610998 | orchestrator | 2025-06-01 04:43:54.611006 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:43:54.611028 
| orchestrator | Sunday 01 June 2025 04:43:23 +0000 (0:00:00.486) 0:00:00.964 *********** 2025-06-01 04:43:54.611037 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-01 04:43:54.611046 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-01 04:43:54.611054 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-01 04:43:54.611063 | orchestrator | 2025-06-01 04:43:54.611072 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-01 04:43:54.611081 | orchestrator | 2025-06-01 04:43:54.611095 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-01 04:43:54.611107 | orchestrator | Sunday 01 June 2025 04:43:23 +0000 (0:00:00.682) 0:00:01.647 *********** 2025-06-01 04:43:54.611119 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:43:54.611131 | orchestrator | 2025-06-01 04:43:54.611143 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-01 04:43:54.611157 | orchestrator | Sunday 01 June 2025 04:43:24 +0000 (0:00:01.010) 0:00:02.658 *********** 2025-06-01 04:43:54.611169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611260 | orchestrator | 2025-06-01 04:43:54.611272 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-01 04:43:54.611282 | orchestrator | Sunday 01 June 2025 04:43:26 +0000 (0:00:01.484) 0:00:04.143 *********** 2025-06-01 04:43:54.611291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611334 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611356 | orchestrator | 2025-06-01 04:43:54.611364 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-01 04:43:54.611372 | orchestrator | Sunday 01 June 2025 04:43:28 +0000 (0:00:02.559) 0:00:06.702 *********** 2025-06-01 04:43:54.611380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611436 | orchestrator | 2025-06-01 04:43:54.611448 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-01 04:43:54.611453 | orchestrator | Sunday 01 June 2025 04:43:31 +0000 (0:00:02.667) 0:00:09.370 *********** 2025-06-01 04:43:54.611457 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 04:43:54.611515 | orchestrator | 2025-06-01 04:43:54.611545 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-06-01 04:43:54.611553 | orchestrator | Sunday 01 June 2025 04:43:33 +0000 (0:00:01.773) 0:00:11.143 *********** 2025-06-01 04:43:54.611560 | orchestrator | 2025-06-01 04:43:54.611568 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-01 04:43:54.611579 | orchestrator | Sunday 01 June 2025 04:43:33 +0000 (0:00:00.050) 0:00:11.193 *********** 2025-06-01 04:43:54.611587 | orchestrator | 2025-06-01 04:43:54.611595 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-01 04:43:54.611602 | orchestrator | Sunday 01 June 2025 04:43:33 +0000 (0:00:00.064) 0:00:11.258 *********** 2025-06-01 04:43:54.611609 | orchestrator | 2025-06-01 04:43:54.611617 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-01 04:43:54.611625 | orchestrator | Sunday 01 June 2025 04:43:33 +0000 (0:00:00.058) 0:00:11.317 *********** 2025-06-01 04:43:54.611632 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:54.611641 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:54.611653 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:54.611661 | orchestrator | 2025-06-01 04:43:54.611669 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-01 04:43:54.611676 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:09.035) 0:00:20.352 *********** 2025-06-01 04:43:54.611684 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:43:54.611691 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:43:54.611698 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:43:54.611706 | orchestrator | 2025-06-01 04:43:54.611713 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:43:54.611721 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:54.611730 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:54.611737 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:43:54.611744 | orchestrator | 2025-06-01 04:43:54.611752 | orchestrator | 2025-06-01 04:43:54.611759 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:43:54.611766 | orchestrator | Sunday 01 June 2025 04:43:51 +0000 (0:00:08.725) 0:00:29.077 *********** 2025-06-01 04:43:54.611774 | orchestrator | =============================================================================== 2025-06-01 04:43:54.611785 | orchestrator | redis : Restart redis container ----------------------------------------- 9.04s 2025-06-01 04:43:54.611793 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.73s 2025-06-01 04:43:54.611800 | orchestrator | redis : Copying over redis config files --------------------------------- 2.67s 2025-06-01 04:43:54.611808 | orchestrator | redis : Copying over default config.json files -------------------------- 2.56s 2025-06-01 04:43:54.611816 | orchestrator | redis : Check redis containers ------------------------------------------ 1.77s 2025-06-01 04:43:54.611823 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.48s 2025-06-01 04:43:54.611830 | orchestrator | redis : include_tasks --------------------------------------------------- 1.01s 2025-06-01 04:43:54.611838 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-06-01 04:43:54.611845 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2025-06-01 04:43:54.611853 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.17s 
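The redis and redis-sentinel healthcheck blocks logged above (`interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` test) correspond to Docker's native container healthcheck options. A minimal sketch of that mapping, assuming the numeric values are seconds (an assumption; the helper name is hypothetical, not Kolla's actual code):

```python
def kolla_healthcheck_to_docker_args(hc):
    """Translate a Kolla-style healthcheck dict (as seen in the log above)
    into `docker run`-style flags. Assumes numeric fields are seconds."""
    # For CMD-SHELL tests, the shell command is everything after the marker.
    cmd = " ".join(hc["test"][1:]) if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# The redis healthcheck dict exactly as it appears in the task output above.
redis_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}
print(kolla_healthcheck_to_docker_args(redis_hc))
```

This is only an illustration of the dict's structure; in the deployment itself the `kolla_docker` module consumes these values directly.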
2025-06-01 04:43:54.611901 | orchestrator | 2025-06-01 04:43:54 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:54.611964 | orchestrator | 2025-06-01 04:43:54 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:54.612579 | orchestrator | 2025-06-01 04:43:54 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:54.613326 | orchestrator | 2025-06-01 04:43:54 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:54.613343 | orchestrator | 2025-06-01 04:43:54 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:43:57.641520 | orchestrator | 2025-06-01 04:43:57 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:43:57.641758 | orchestrator | 2025-06-01 04:43:57 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:43:57.642187 | orchestrator | 2025-06-01 04:43:57 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:43:57.643029 | orchestrator | 2025-06-01 04:43:57 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:43:57.644704 | orchestrator | 2025-06-01 04:43:57 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:43:57.644786 | orchestrator | 2025-06-01 04:43:57 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:00.673497 | orchestrator | 2025-06-01 04:44:00 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:00.673698 | orchestrator | 2025-06-01 04:44:00 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:00.674627 | orchestrator | 2025-06-01 04:44:00 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:00.676080 | orchestrator | 2025-06-01 04:44:00 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 
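The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above are produced by a poll-until-done loop over the queued task IDs. A rough sketch of that pattern (names and the `get_state` callable are hypothetical stand-ins for the task API, not the OSISM implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=60):
    """Poll each task id until none report STARTED, sleeping `interval`
    seconds between rounds, mirroring the log records above."""
    pending = set(task_ids)
    for _ in range(max_checks):
        # sorted() copies the set, so discarding while iterating is safe.
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state != "STARTED":
                pending.discard(tid)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False  # gave up before all tasks left STARTED
```

With a fake `get_state` that returns STARTED once and then a terminal state, the loop prints two rounds of status lines and returns True.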
2025-06-01 04:44:00.678187 | orchestrator | 2025-06-01 04:44:00 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:00.678223 | orchestrator | 2025-06-01 04:44:00 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:03.718868 | orchestrator | 2025-06-01 04:44:03 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:03.719279 | orchestrator | 2025-06-01 04:44:03 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:03.720478 | orchestrator | 2025-06-01 04:44:03 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:03.722723 | orchestrator | 2025-06-01 04:44:03 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:03.723938 | orchestrator | 2025-06-01 04:44:03 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:03.723959 | orchestrator | 2025-06-01 04:44:03 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:06.762569 | orchestrator | 2025-06-01 04:44:06 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:06.762680 | orchestrator | 2025-06-01 04:44:06 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:06.763796 | orchestrator | 2025-06-01 04:44:06 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:06.764168 | orchestrator | 2025-06-01 04:44:06 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:06.765965 | orchestrator | 2025-06-01 04:44:06 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:06.766071 | orchestrator | 2025-06-01 04:44:06 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:09.798009 | orchestrator | 2025-06-01 04:44:09 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:09.802411 | 
orchestrator | 2025-06-01 04:44:09 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:09.802494 | orchestrator | 2025-06-01 04:44:09 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:09.803202 | orchestrator | 2025-06-01 04:44:09 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:09.805431 | orchestrator | 2025-06-01 04:44:09 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:09.807215 | orchestrator | 2025-06-01 04:44:09 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:12.853424 | orchestrator | 2025-06-01 04:44:12 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:12.853773 | orchestrator | 2025-06-01 04:44:12 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:12.853802 | orchestrator | 2025-06-01 04:44:12 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:12.854333 | orchestrator | 2025-06-01 04:44:12 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:12.855439 | orchestrator | 2025-06-01 04:44:12 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:12.855511 | orchestrator | 2025-06-01 04:44:12 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:15.889657 | orchestrator | 2025-06-01 04:44:15 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:15.889809 | orchestrator | 2025-06-01 04:44:15 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:15.890677 | orchestrator | 2025-06-01 04:44:15 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:15.891028 | orchestrator | 2025-06-01 04:44:15 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:15.891814 | 
orchestrator | 2025-06-01 04:44:15 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:15.891845 | orchestrator | 2025-06-01 04:44:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:18.924049 | orchestrator | 2025-06-01 04:44:18 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:18.924273 | orchestrator | 2025-06-01 04:44:18 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:18.925193 | orchestrator | 2025-06-01 04:44:18 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:18.926702 | orchestrator | 2025-06-01 04:44:18 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:18.927209 | orchestrator | 2025-06-01 04:44:18 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:18.927233 | orchestrator | 2025-06-01 04:44:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:21.959307 | orchestrator | 2025-06-01 04:44:21 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:21.960407 | orchestrator | 2025-06-01 04:44:21 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state STARTED 2025-06-01 04:44:21.964588 | orchestrator | 2025-06-01 04:44:21 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:21.964644 | orchestrator | 2025-06-01 04:44:21 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:21.964657 | orchestrator | 2025-06-01 04:44:21 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:21.964670 | orchestrator | 2025-06-01 04:44:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:25.013266 | orchestrator | 2025-06-01 04:44:25 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:44:25.013633 | orchestrator | 2025-06-01 
04:44:25 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:25.018459 | orchestrator | 2025-06-01 04:44:25.018504 | orchestrator | 2025-06-01 04:44:25.018516 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:44:25.018553 | orchestrator | 2025-06-01 04:44:25.018565 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:44:25.018577 | orchestrator | Sunday 01 June 2025 04:43:21 +0000 (0:00:00.498) 0:00:00.498 *********** 2025-06-01 04:44:25.018588 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:25.018600 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:25.018619 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:44:25.018631 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:44:25.018642 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:44:25.018653 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:44:25.018683 | orchestrator | 2025-06-01 04:44:25.018695 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:44:25.018706 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.979) 0:00:01.478 *********** 2025-06-01 04:44:25.018717 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-01 04:44:25.018728 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-01 04:44:25.018739 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-01 04:44:25.018750 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-01 04:44:25.018760 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-01 04:44:25.018771 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-01 
04:44:25.018782 | orchestrator | 2025-06-01 04:44:25.018792 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-01 04:44:25.018803 | orchestrator | 2025-06-01 04:44:25.018814 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-01 04:44:25.018829 | orchestrator | Sunday 01 June 2025 04:43:23 +0000 (0:00:01.013) 0:00:02.492 *********** 2025-06-01 04:44:25.018842 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:44:25.018854 | orchestrator | 2025-06-01 04:44:25.018865 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-01 04:44:25.018876 | orchestrator | Sunday 01 June 2025 04:43:26 +0000 (0:00:02.378) 0:00:04.870 *********** 2025-06-01 04:44:25.018887 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-01 04:44:25.018899 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-01 04:44:25.018910 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-01 04:44:25.018921 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-01 04:44:25.018931 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-01 04:44:25.018942 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-01 04:44:25.018953 | orchestrator | 2025-06-01 04:44:25.018964 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-01 04:44:25.018975 | orchestrator | Sunday 01 June 2025 04:43:28 +0000 (0:00:01.770) 0:00:06.641 *********** 2025-06-01 04:44:25.018986 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-01 04:44:25.018996 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-01 04:44:25.019007 | orchestrator | 
changed: [testbed-node-2] => (item=openvswitch) 2025-06-01 04:44:25.019018 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-01 04:44:25.019028 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-01 04:44:25.019039 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-01 04:44:25.019052 | orchestrator | 2025-06-01 04:44:25.019065 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-01 04:44:25.019078 | orchestrator | Sunday 01 June 2025 04:43:29 +0000 (0:00:01.517) 0:00:08.158 *********** 2025-06-01 04:44:25.019092 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-01 04:44:25.019105 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:25.019117 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-01 04:44:25.019128 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:25.019138 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-01 04:44:25.019149 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:25.019159 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-01 04:44:25.019170 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:25.019181 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-01 04:44:25.019191 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:25.019207 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-01 04:44:25.019218 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:25.019229 | orchestrator | 2025-06-01 04:44:25.019239 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-01 04:44:25.019250 | orchestrator | Sunday 01 June 2025 04:43:31 +0000 (0:00:01.446) 0:00:09.605 *********** 2025-06-01 04:44:25.019261 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:25.019272 | orchestrator 
| skipping: [testbed-node-1] 2025-06-01 04:44:25.019282 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:25.019293 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:25.019303 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:25.019314 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:25.019325 | orchestrator | 2025-06-01 04:44:25.019335 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-01 04:44:25.019346 | orchestrator | Sunday 01 June 2025 04:43:31 +0000 (0:00:00.621) 0:00:10.227 *********** 2025-06-01 04:44:25.019380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019620 | orchestrator | 2025-06-01 04:44:25.019631 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-01 04:44:25.019643 | orchestrator | Sunday 01 June 2025 04:43:33 +0000 (0:00:02.008) 0:00:12.235 *********** 2025-06-01 04:44:25.019659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-06-01 04:44:25.019701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019833 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019845 | orchestrator | 2025-06-01 04:44:25.019856 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-01 04:44:25.019867 | orchestrator | Sunday 01 June 2025 04:43:37 +0000 (0:00:04.101) 0:00:16.336 *********** 2025-06-01 04:44:25.019878 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:25.019889 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:25.019900 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:25.019911 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:25.019922 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:25.019932 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:25.019943 | orchestrator | 2025-06-01 04:44:25.019954 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-01 04:44:25.019965 | orchestrator | Sunday 01 June 2025 04:43:39 +0000 (0:00:01.496) 0:00:17.833 *********** 2025-06-01 04:44:25.019976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.019994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020018 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 04:44:25.020136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-01 04:44:25.020147 | orchestrator |
2025-06-01 04:44:25.020159 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 04:44:25.020176 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:03.124) 0:00:20.958 ***********
2025-06-01 04:44:25.020188 | orchestrator |
2025-06-01 04:44:25.020198 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 04:44:25.020210 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:00.193) 0:00:21.151 ***********
2025-06-01 04:44:25.020220 | orchestrator |
2025-06-01 04:44:25.020231 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 04:44:25.020242 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:00.255) 0:00:21.406 ***********
2025-06-01 04:44:25.020253 | orchestrator |
2025-06-01 04:44:25.020263 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 04:44:25.020274 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.242) 0:00:21.650 ***********
2025-06-01 04:44:25.020285 | orchestrator |
2025-06-01 04:44:25.020296 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 04:44:25.020306 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.269) 0:00:21.920 ***********
2025-06-01 04:44:25.020317 | orchestrator |
2025-06-01 04:44:25.020328 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 04:44:25.020339 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.141) 0:00:22.061 ***********
2025-06-01 04:44:25.020349 | orchestrator |
2025-06-01 04:44:25.020360 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-06-01 04:44:25.020371 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.472) 0:00:22.533 ***********
2025-06-01 04:44:25.020382 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:25.020393 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:25.020404 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:44:25.020414 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:25.020425 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:44:25.020436 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:44:25.020447 | orchestrator |
2025-06-01 04:44:25.020458 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-06-01 04:44:25.020469 | orchestrator | Sunday 01 June 2025 04:43:49 +0000 (0:00:05.913) 0:00:28.447 ***********
2025-06-01 04:44:25.020479 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:25.020490 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:25.020501 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:44:25.020512 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:25.020541 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:44:25.020552 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:44:25.020563 | orchestrator |
2025-06-01 04:44:25.020574 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-01 04:44:25.020585 | orchestrator | Sunday 01 June 2025 04:43:51 +0000 (0:00:01.667) 0:00:30.115
2025-06-01 04:44:25.020595 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:25.020606 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:25.020623 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:44:25.020635 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:44:25.020646 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:25.020656 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:44:25.020667 | orchestrator |
2025-06-01 04:44:25.020678 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-06-01 04:44:25.020689 | orchestrator | Sunday 01 June 2025 04:44:00 +0000 (0:00:09.182) 0:00:39.297 ***********
2025-06-01 04:44:25.020699 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-06-01 04:44:25.020710 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-06-01 04:44:25.020721 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-06-01 04:44:25.020732 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-06-01 04:44:25.020749 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-06-01 04:44:25.020766 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-06-01 04:44:25.020777 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-06-01 04:44:25.020788 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-06-01 04:44:25.020803 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-06-01 04:44:25.020814 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-06-01 04:44:25.020825 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-06-01 04:44:25.020836 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-06-01 04:44:25.020846 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-01 04:44:25.020857 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-01 04:44:25.020868 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-01 04:44:25.020879 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-01 04:44:25.020889 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-01 04:44:25.020900 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-01 04:44:25.020911 | orchestrator |
2025-06-01 04:44:25.020922 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-06-01 04:44:25.020932 | orchestrator | Sunday 01 June 2025 04:44:08 +0000 (0:00:07.569) 0:00:46.867 ***********
2025-06-01 04:44:25.020943 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-06-01 04:44:25.020954 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:44:25.020965 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-06-01 04:44:25.020976 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:44:25.020987 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-06-01 04:44:25.020998 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:44:25.021009 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-06-01 04:44:25.021020 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-06-01 04:44:25.021031 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-06-01 04:44:25.021041 | orchestrator |
2025-06-01 04:44:25.021052 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-06-01 04:44:25.021063 | orchestrator | Sunday 01 June 2025 04:44:10 +0000 (0:00:02.185) 0:00:49.053 ***********
2025-06-01 04:44:25.021074 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-06-01 04:44:25.021085 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:44:25.021096 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-06-01 04:44:25.021107 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:44:25.021118 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-06-01 04:44:25.021128 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:44:25.021139 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-06-01 04:44:25.021150 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-06-01 04:44:25.021161 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-06-01 04:44:25.021177 | orchestrator |
2025-06-01 04:44:25.021188 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-01 04:44:25.021199 | orchestrator | Sunday 01 June 2025 04:44:14 +0000 (0:00:03.643) 0:00:52.696 ***********
2025-06-01 04:44:25.021210 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:25.021221 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:25.021231 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:44:25.021242 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:25.021253 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:44:25.021263 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:44:25.021274 | orchestrator |
2025-06-01 04:44:25.021285 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:44:25.021296 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 04:44:25.021307 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 04:44:25.021319 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 04:44:25.021330 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 04:44:25.021341 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 04:44:25.021357 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 04:44:25.021369 | orchestrator |
2025-06-01 04:44:25.021380 | orchestrator |
2025-06-01 04:44:25.021390 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:44:25.021401 | orchestrator | Sunday 01 June 2025 04:44:21 +0000 (0:00:07.671) 0:01:00.367 ***********
2025-06-01 04:44:25.021417 | orchestrator | ===============================================================================
2025-06-01 04:44:25.021428 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.85s
2025-06-01 04:44:25.021438 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.57s
2025-06-01 04:44:25.021449 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.91s
2025-06-01 04:44:25.021460 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.10s
2025-06-01 04:44:25.021470 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.64s
2025-06-01 04:44:25.021482 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.12s
2025-06-01 04:44:25.021492 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.38s
2025-06-01 04:44:25.021503 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.19s
2025-06-01 04:44:25.021513 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.01s
2025-06-01 04:44:25.021541 | orchestrator | module-load : Load modules ---------------------------------------------- 1.77s
2025-06-01 04:44:25.021552 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.67s
2025-06-01 04:44:25.021563 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.58s
2025-06-01 04:44:25.021574 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.52s
2025-06-01 04:44:25.021585 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.50s
2025-06-01 04:44:25.021596 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.45s
2025-06-01 04:44:25.021606 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s
2025-06-01 04:44:25.021623 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s
2025-06-01 04:44:25.021634 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.62s
2025-06-01 04:44:25.021689
| orchestrator | 2025-06-01 04:44:25 | INFO  | Task 54fe67ee-d041-4971-87a9-2c9204a45497 is in state SUCCESS 2025-06-01 04:44:25.021702 | orchestrator | 2025-06-01 04:44:25 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:25.021713 | orchestrator | 2025-06-01 04:44:25 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:25.021724 | orchestrator | 2025-06-01 04:44:25 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state STARTED 2025-06-01 04:44:25.021735 | orchestrator | 2025-06-01 04:44:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:44:58.454893 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:44:58.455149 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task 8d700b40-18c8-45cd-9b1c-fa24e7d08b74 is in state STARTED 2025-06-01 04:44:58.458411 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:44:58.458910 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:44:58.459660 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:44:58.460287 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task 3df3593c-0794-4b66-818d-e619ea455a35 is in state STARTED 2025-06-01 04:44:58.463853 | orchestrator | 2025-06-01 04:44:58 | INFO  | Task 0e763587-819f-43db-84ca-ab1e2374fd34 is in state SUCCESS 2025-06-01 04:44:58.465316 | orchestrator | 2025-06-01 04:44:58.465346 | orchestrator | 2025-06-01 04:44:58.465356 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-01 04:44:58.465366 | orchestrator | 2025-06-01 04:44:58.465376 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-01 04:44:58.465385 | orchestrator | Sunday 01 June 2025 04:40:57 +0000
(0:00:00.190) 0:00:00.190 *********** 2025-06-01 04:44:58.465394 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:44:58.465404 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:44:58.465412 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:44:58.465421 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:58.465430 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:58.465438 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:44:58.465447 | orchestrator | 2025-06-01 04:44:58.465456 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-01 04:44:58.465465 | orchestrator | Sunday 01 June 2025 04:40:58 +0000 (0:00:00.815) 0:00:01.005 *********** 2025-06-01 04:44:58.465474 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.465484 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.465492 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.465501 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.465509 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.465567 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.465578 | orchestrator | 2025-06-01 04:44:58.465587 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-01 04:44:58.465596 | orchestrator | Sunday 01 June 2025 04:40:59 +0000 (0:00:00.666) 0:00:01.672 *********** 2025-06-01 04:44:58.465627 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.465636 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.465644 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.465653 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.465662 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.465670 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.465679 | orchestrator | 2025-06-01 04:44:58.465687 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] 
************************************* 2025-06-01 04:44:58.465696 | orchestrator | Sunday 01 June 2025 04:41:00 +0000 (0:00:00.875) 0:00:02.547 *********** 2025-06-01 04:44:58.465705 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:44:58.465714 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:44:58.465722 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:44:58.465731 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:44:58.465740 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:44:58.465748 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:44:58.465757 | orchestrator | 2025-06-01 04:44:58.465766 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-01 04:44:58.465775 | orchestrator | Sunday 01 June 2025 04:41:01 +0000 (0:00:01.855) 0:00:04.402 *********** 2025-06-01 04:44:58.465783 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:44:58.465792 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:44:58.465800 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:44:58.465809 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:44:58.465817 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:44:58.465826 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:44:58.465834 | orchestrator | 2025-06-01 04:44:58.465843 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-01 04:44:58.465852 | orchestrator | Sunday 01 June 2025 04:41:03 +0000 (0:00:01.071) 0:00:05.474 *********** 2025-06-01 04:44:58.465860 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:44:58.465869 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:44:58.465877 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:44:58.465886 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:44:58.465895 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:44:58.465903 | orchestrator | changed: [testbed-node-2] 2025-06-01 
04:44:58.465912 | orchestrator | 2025-06-01 04:44:58.465933 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-01 04:44:58.465944 | orchestrator | Sunday 01 June 2025 04:41:04 +0000 (0:00:01.051) 0:00:06.526 *********** 2025-06-01 04:44:58.465954 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.465965 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.465975 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.465984 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.465994 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466004 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466054 | orchestrator | 2025-06-01 04:44:58.466067 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-01 04:44:58.466078 | orchestrator | Sunday 01 June 2025 04:41:05 +0000 (0:00:00.931) 0:00:07.457 *********** 2025-06-01 04:44:58.466089 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.466099 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.466108 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.466118 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.466129 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466138 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466148 | orchestrator | 2025-06-01 04:44:58.466158 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-01 04:44:58.466168 | orchestrator | Sunday 01 June 2025 04:41:05 +0000 (0:00:00.823) 0:00:08.281 *********** 2025-06-01 04:44:58.466178 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 04:44:58.466196 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 04:44:58.466206 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 04:44:58.466216 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 04:44:58.466226 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 04:44:58.466235 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.466246 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 04:44:58.466256 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 04:44:58.466266 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.466276 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 04:44:58.466297 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 04:44:58.466307 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.466315 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 04:44:58.466324 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 04:44:58.466332 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466341 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 04:44:58.466350 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 04:44:58.466358 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466367 | orchestrator | 2025-06-01 04:44:58.466375 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-01 04:44:58.466384 | orchestrator | Sunday 01 June 2025 04:41:06 +0000 (0:00:00.833) 0:00:09.115 *********** 2025-06-01 04:44:58.466393 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.466401 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
04:44:58.466410 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.466419 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.466427 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466436 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466444 | orchestrator | 2025-06-01 04:44:58.466453 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-01 04:44:58.466463 | orchestrator | Sunday 01 June 2025 04:41:07 +0000 (0:00:01.120) 0:00:10.235 *********** 2025-06-01 04:44:58.466472 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:44:58.466480 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:44:58.466489 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:44:58.466497 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:58.466506 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:58.466514 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:44:58.466541 | orchestrator | 2025-06-01 04:44:58.466550 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-01 04:44:58.466559 | orchestrator | Sunday 01 June 2025 04:41:08 +0000 (0:00:00.726) 0:00:10.961 *********** 2025-06-01 04:44:58.466567 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:44:58.466576 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:44:58.466584 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:44:58.466597 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:44:58.466612 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:44:58.466626 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:44:58.466640 | orchestrator | 2025-06-01 04:44:58.466655 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-01 04:44:58.466670 | orchestrator | Sunday 01 June 2025 04:41:14 +0000 (0:00:05.990) 0:00:16.951 *********** 2025-06-01 04:44:58.466685 
| orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.466700 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.466714 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.466727 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.466765 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466774 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466783 | orchestrator | 2025-06-01 04:44:58.466792 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-01 04:44:58.466800 | orchestrator | Sunday 01 June 2025 04:41:15 +0000 (0:00:00.922) 0:00:17.874 *********** 2025-06-01 04:44:58.466809 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.466817 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.466826 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.466835 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.466843 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466857 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466866 | orchestrator | 2025-06-01 04:44:58.466875 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-01 04:44:58.466885 | orchestrator | Sunday 01 June 2025 04:41:16 +0000 (0:00:01.490) 0:00:19.364 *********** 2025-06-01 04:44:58.466894 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.466903 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.466911 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.466920 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.466928 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.466937 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.466945 | orchestrator | 2025-06-01 04:44:58.466954 | orchestrator | TASK [k3s_custom_registries : Create 
directory /etc/rancher/k3s] *************** 2025-06-01 04:44:58.466962 | orchestrator | Sunday 01 June 2025 04:41:17 +0000 (0:00:00.808) 0:00:20.173 *********** 2025-06-01 04:44:58.466971 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-01 04:44:58.466980 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-01 04:44:58.466989 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.466997 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-01 04:44:58.467006 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-01 04:44:58.467015 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.467023 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-01 04:44:58.467032 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-01 04:44:58.467040 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.467049 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-01 04:44:58.467057 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-01 04:44:58.467066 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.467074 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-01 04:44:58.467083 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-01 04:44:58.467091 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.467100 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-01 04:44:58.467108 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-01 04:44:58.467117 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.467125 | orchestrator | 2025-06-01 04:44:58.467134 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-01 04:44:58.467150 | orchestrator | Sunday 01 June 2025 04:41:18 +0000 (0:00:00.805) 0:00:20.979 *********** 2025-06-01 
04:44:58.467159 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.467168 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.467179 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.467193 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.467207 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.467221 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.467233 | orchestrator | 2025-06-01 04:44:58.467248 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-01 04:44:58.467263 | orchestrator | 2025-06-01 04:44:58.467277 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-01 04:44:58.467300 | orchestrator | Sunday 01 June 2025 04:41:19 +0000 (0:00:01.432) 0:00:22.411 *********** 2025-06-01 04:44:58.467315 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:58.467329 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:58.467344 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:44:58.467357 | orchestrator | 2025-06-01 04:44:58.467366 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-01 04:44:58.467375 | orchestrator | Sunday 01 June 2025 04:41:21 +0000 (0:00:01.682) 0:00:24.094 *********** 2025-06-01 04:44:58.467383 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:58.467392 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:58.467400 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:44:58.467409 | orchestrator | 2025-06-01 04:44:58.467418 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-01 04:44:58.467427 | orchestrator | Sunday 01 June 2025 04:41:23 +0000 (0:00:01.604) 0:00:25.699 *********** 2025-06-01 04:44:58.467435 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:58.467443 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:58.467452 | 
orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.467460 | orchestrator |
2025-06-01 04:44:58.467469 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-06-01 04:44:58.467478 | orchestrator | Sunday 01 June 2025 04:41:24 +0000 (0:00:01.165) 0:00:26.864 ***********
2025-06-01 04:44:58.467486 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.467495 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.467504 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.467512 | orchestrator |
2025-06-01 04:44:58.467563 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-06-01 04:44:58.467573 | orchestrator | Sunday 01 June 2025 04:41:25 +0000 (0:00:00.856) 0:00:27.720 ***********
2025-06-01 04:44:58.467582 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.467590 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.467599 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.467607 | orchestrator |
2025-06-01 04:44:58.467616 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-06-01 04:44:58.467625 | orchestrator | Sunday 01 June 2025 04:41:25 +0000 (0:00:00.418) 0:00:28.138 ***********
2025-06-01 04:44:58.467633 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:44:58.467642 | orchestrator |
2025-06-01 04:44:58.467651 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-06-01 04:44:58.467660 | orchestrator | Sunday 01 June 2025 04:41:26 +0000 (0:00:00.704) 0:00:28.843 ***********
2025-06-01 04:44:58.467668 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.467677 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.467685 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.467694 | orchestrator |
2025-06-01 04:44:58.467703 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-06-01 04:44:58.467711 | orchestrator | Sunday 01 June 2025 04:41:28 +0000 (0:00:01.849) 0:00:30.692 ***********
2025-06-01 04:44:58.467720 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.467734 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.467743 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.467751 | orchestrator |
2025-06-01 04:44:58.467760 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-06-01 04:44:58.467769 | orchestrator | Sunday 01 June 2025 04:41:29 +0000 (0:00:01.085) 0:00:31.778 ***********
2025-06-01 04:44:58.467778 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.467786 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.467795 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.467803 | orchestrator |
2025-06-01 04:44:58.467812 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-06-01 04:44:58.467820 | orchestrator | Sunday 01 June 2025 04:41:30 +0000 (0:00:00.859) 0:00:32.637 ***********
2025-06-01 04:44:58.467838 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.467847 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.467855 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.467864 | orchestrator |
2025-06-01 04:44:58.467873 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-06-01 04:44:58.467881 | orchestrator | Sunday 01 June 2025 04:41:31 +0000 (0:00:01.730) 0:00:34.368 ***********
2025-06-01 04:44:58.467890 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.467899 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.467907 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.467916 | orchestrator |
2025-06-01 04:44:58.467925 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-06-01 04:44:58.467933 | orchestrator | Sunday 01 June 2025 04:41:32 +0000 (0:00:00.246) 0:00:34.614 ***********
2025-06-01 04:44:58.467942 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.467951 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.467959 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.467968 | orchestrator |
2025-06-01 04:44:58.467977 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-06-01 04:44:58.467985 | orchestrator | Sunday 01 June 2025 04:41:32 +0000 (0:00:00.265) 0:00:34.879 ***********
2025-06-01 04:44:58.467994 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468003 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468011 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.468020 | orchestrator |
2025-06-01 04:44:58.468029 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-06-01 04:44:58.468038 | orchestrator | Sunday 01 June 2025 04:41:34 +0000 (0:00:01.754) 0:00:36.634 ***********
2025-06-01 04:44:58.468053 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-01 04:44:58.468064 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-01 04:44:58.468072 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-01 04:44:58.468081 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-01 04:44:58.468090 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-01 04:44:58.468099 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-01 04:44:58.468107 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-01 04:44:58.468116 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-01 04:44:58.468125 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-01 04:44:58.468134 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-01 04:44:58.468145 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-01 04:44:58.468160 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-01 04:44:58.468184 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-01 04:44:58.468198 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-01 04:44:58.468225 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-01 04:44:58.468240 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.468253 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.468268 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.468281 | orchestrator |
2025-06-01 04:44:58.468297 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-06-01 04:44:58.468312 | orchestrator | Sunday 01 June 2025 04:42:30 +0000 (0:00:56.406) 0:01:33.040 ***********
2025-06-01 04:44:58.468326 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.468344 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.468354 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.468362 | orchestrator |
2025-06-01 04:44:58.468371 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-06-01 04:44:58.468380 | orchestrator | Sunday 01 June 2025 04:42:30 +0000 (0:00:00.309) 0:01:33.350 ***********
2025-06-01 04:44:58.468388 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468397 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468406 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.468414 | orchestrator |
2025-06-01 04:44:58.468423 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-06-01 04:44:58.468431 | orchestrator | Sunday 01 June 2025 04:42:31 +0000 (0:00:01.149) 0:01:34.342 ***********
2025-06-01 04:44:58.468440 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468449 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468457 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.468466 | orchestrator |
2025-06-01 04:44:58.468474 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-06-01 04:44:58.468483 | orchestrator | Sunday 01 June 2025 04:42:33 +0000 (0:00:01.149) 0:01:35.491 ***********
2025-06-01 04:44:58.468491 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468500 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468508 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.468572 | orchestrator |
2025-06-01 04:44:58.468583 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-06-01 04:44:58.468592 | orchestrator | Sunday 01 June 2025 04:42:47 +0000 (0:00:14.933) 0:01:50.425 ***********
2025-06-01 04:44:58.468601 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.468610 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.468618 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.468627 | orchestrator |
2025-06-01 04:44:58.468636 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-06-01 04:44:58.468644 | orchestrator | Sunday 01 June 2025 04:42:48 +0000 (0:00:00.642) 0:01:51.067 ***********
2025-06-01 04:44:58.468653 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.468661 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.468669 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.468677 | orchestrator |
2025-06-01 04:44:58.468685 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-06-01 04:44:58.468693 | orchestrator | Sunday 01 June 2025 04:42:49 +0000 (0:00:00.594) 0:01:51.662 ***********
2025-06-01 04:44:58.468701 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468709 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468716 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.468724 | orchestrator |
2025-06-01 04:44:58.468740 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-06-01 04:44:58.468748 | orchestrator | Sunday 01 June 2025 04:42:49 +0000 (0:00:00.587) 0:01:52.249 ***********
2025-06-01 04:44:58.468762 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.468776 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.468795 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.468811 | orchestrator |
2025-06-01 04:44:58.468824 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-06-01 04:44:58.468847 | orchestrator | Sunday 01 June 2025 04:42:50 +0000 (0:00:00.837) 0:01:53.087 ***********
2025-06-01 04:44:58.468860 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.468874 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.468887 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.468902 | orchestrator |
2025-06-01 04:44:58.468913 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-06-01 04:44:58.468922 | orchestrator | Sunday 01 June 2025 04:42:50 +0000 (0:00:00.293) 0:01:53.380 ***********
2025-06-01 04:44:58.468930 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468938 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468945 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.468953 | orchestrator |
2025-06-01 04:44:58.468961 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-01 04:44:58.468969 | orchestrator | Sunday 01 June 2025 04:42:51 +0000 (0:00:00.597) 0:01:53.977 ***********
2025-06-01 04:44:58.468977 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.468984 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.468992 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.469000 | orchestrator |
2025-06-01 04:44:58.469008 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-01 04:44:58.469015 | orchestrator | Sunday 01 June 2025 04:42:52 +0000 (0:00:00.576) 0:01:54.554 ***********
2025-06-01 04:44:58.469023 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.469031 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.469039 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.469047 | orchestrator |
2025-06-01 04:44:58.469054 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-01 04:44:58.469062 | orchestrator | Sunday 01 June 2025 04:42:53 +0000 (0:00:01.023) 0:01:55.578 ***********
2025-06-01 04:44:58.469070 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:44:58.469078 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:44:58.469085 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:44:58.469093 | orchestrator |
2025-06-01 04:44:58.469101 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-01 04:44:58.469109 | orchestrator | Sunday 01 June 2025 04:42:53 +0000 (0:00:00.797) 0:01:56.375 ***********
2025-06-01 04:44:58.469117 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.469125 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.469132 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.469140 | orchestrator |
2025-06-01 04:44:58.469148 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-01 04:44:58.469156 | orchestrator | Sunday 01 June 2025 04:42:54 +0000 (0:00:00.256) 0:01:56.631 ***********
2025-06-01 04:44:58.469164 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.469171 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.469179 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.469187 | orchestrator |
2025-06-01 04:44:58.469195 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-01 04:44:58.469202 | orchestrator | Sunday 01 June 2025 04:42:54 +0000 (0:00:00.274) 0:01:56.906 ***********
2025-06-01 04:44:58.469210 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.469218 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.469231 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.469239 | orchestrator |
2025-06-01 04:44:58.469247 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-01 04:44:58.469255 | orchestrator | Sunday 01 June 2025 04:42:55 +0000 (0:00:00.847) 0:01:57.754 ***********
2025-06-01 04:44:58.469263 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.469270 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.469278 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.469286 | orchestrator |
2025-06-01 04:44:58.469294 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-06-01 04:44:58.469308 | orchestrator | Sunday 01 June 2025 04:42:55 +0000 (0:00:00.626) 0:01:58.381 ***********
2025-06-01 04:44:58.469316 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-01 04:44:58.469324 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-01 04:44:58.469331 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-01 04:44:58.469339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-01 04:44:58.469347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-01 04:44:58.469355 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-01 04:44:58.469363 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-01 04:44:58.469371 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-01 04:44:58.469379 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-01 04:44:58.469386 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-06-01 04:44:58.469394 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-01 04:44:58.469402 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-01 04:44:58.469418 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-06-01 04:44:58.469426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-01 04:44:58.469434 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-01 04:44:58.469441 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-01 04:44:58.469449 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-01 04:44:58.469457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-01 04:44:58.469465 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-01 04:44:58.469473 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-01 04:44:58.469481 | orchestrator |
2025-06-01 04:44:58.469489 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-06-01 04:44:58.469496 | orchestrator |
2025-06-01 04:44:58.469504 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-06-01 04:44:58.469512 | orchestrator | Sunday 01 June 2025 04:42:58 +0000 (0:00:02.998) 0:02:01.380 ***********
2025-06-01 04:44:58.469542 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:44:58.469554 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:44:58.469562 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:44:58.469570 | orchestrator |
2025-06-01 04:44:58.469577 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-06-01 04:44:58.469585 | orchestrator | Sunday 01 June 2025 04:42:59 +0000 (0:00:00.531) 0:02:01.911 ***********
2025-06-01 04:44:58.469593 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:44:58.469601 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:44:58.469608 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:44:58.469616 | orchestrator |
2025-06-01 04:44:58.469624 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-06-01 04:44:58.469631 | orchestrator | Sunday 01 June 2025 04:43:00 +0000 (0:00:00.602) 0:02:02.513 ***********
2025-06-01 04:44:58.469639 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:44:58.469647 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:44:58.469660 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:44:58.469667 | orchestrator |
2025-06-01 04:44:58.469675 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-06-01 04:44:58.469683 | orchestrator | Sunday 01 June 2025 04:43:00 +0000 (0:00:00.325) 0:02:02.839 ***********
2025-06-01 04:44:58.469691 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:44:58.469699 | orchestrator |
2025-06-01 04:44:58.469707 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-06-01 04:44:58.469714 | orchestrator | Sunday 01 June 2025 04:43:00 +0000 (0:00:00.608) 0:02:03.447 ***********
2025-06-01 04:44:58.469722 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:44:58.469730 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:44:58.469738 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:44:58.469745 | orchestrator |
2025-06-01 04:44:58.469753 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-06-01 04:44:58.469761 | orchestrator | Sunday 01 June 2025 04:43:01 +0000 (0:00:00.297) 0:02:03.745 ***********
2025-06-01 04:44:58.469777 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:44:58.469785 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:44:58.469793 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:44:58.469800 | orchestrator |
2025-06-01 04:44:58.469808 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-06-01 04:44:58.469816 | orchestrator | Sunday 01 June 2025 04:43:01 +0000 (0:00:00.276) 0:02:04.021 ***********
2025-06-01 04:44:58.469824 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:44:58.469832 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:44:58.469839 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:44:58.469847 | orchestrator |
2025-06-01 04:44:58.469855 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-06-01 04:44:58.469863 | orchestrator | Sunday 01 June 2025 04:43:01 +0000 (0:00:00.332) 0:02:04.354 ***********
2025-06-01 04:44:58.469870 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:44:58.469878 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:44:58.469886 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:44:58.469893 | orchestrator |
2025-06-01 04:44:58.469901 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-06-01 04:44:58.469909 | orchestrator | Sunday 01 June 2025 04:43:03 +0000 (0:00:01.416) 0:02:05.771 ***********
2025-06-01 04:44:58.469920 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:44:58.469934 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:44:58.469947 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:44:58.469959 | orchestrator |
2025-06-01 04:44:58.469972 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-01 04:44:58.469984 | orchestrator |
2025-06-01 04:44:58.469996 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-01 04:44:58.470009 | orchestrator | Sunday 01 June 2025 04:43:11 +0000 (0:00:08.592) 0:02:14.364 ***********
2025-06-01 04:44:58.470054 | orchestrator | ok: [testbed-manager]
2025-06-01 04:44:58.470066 | orchestrator |
2025-06-01 04:44:58.470078 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-01 04:44:58.470090 | orchestrator | Sunday 01 June 2025 04:43:12 +0000 (0:00:00.723) 0:02:15.088 ***********
2025-06-01 04:44:58.470103 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470115 | orchestrator |
2025-06-01 04:44:58.470129 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-01 04:44:58.470142 | orchestrator | Sunday 01 June 2025 04:43:13 +0000 (0:00:00.407) 0:02:15.495 ***********
2025-06-01 04:44:58.470157 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-01 04:44:58.470169 | orchestrator |
2025-06-01 04:44:58.470193 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-01 04:44:58.470207 | orchestrator | Sunday 01 June 2025 04:43:13 +0000 (0:00:00.928) 0:02:16.424 ***********
2025-06-01 04:44:58.470221 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470246 | orchestrator |
2025-06-01 04:44:58.470255 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-01 04:44:58.470263 | orchestrator | Sunday 01 June 2025 04:43:14 +0000 (0:00:00.873) 0:02:17.297 ***********
2025-06-01 04:44:58.470270 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470278 | orchestrator |
2025-06-01 04:44:58.470286 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-01 04:44:58.470294 | orchestrator | Sunday 01 June 2025 04:43:15 +0000 (0:00:00.687) 0:02:17.985 ***********
2025-06-01 04:44:58.470301 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 04:44:58.470309 | orchestrator |
2025-06-01 04:44:58.470317 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-01 04:44:58.470325 | orchestrator | Sunday 01 June 2025 04:43:17 +0000 (0:00:01.741) 0:02:19.726 ***********
2025-06-01 04:44:58.470332 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 04:44:58.470340 | orchestrator |
2025-06-01 04:44:58.470348 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-01 04:44:58.470356 | orchestrator | Sunday 01 June 2025 04:43:18 +0000 (0:00:00.841) 0:02:20.568 ***********
2025-06-01 04:44:58.470364 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470371 | orchestrator |
2025-06-01 04:44:58.470379 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-01 04:44:58.470387 | orchestrator | Sunday 01 June 2025 04:43:18 +0000 (0:00:00.418) 0:02:20.987 ***********
2025-06-01 04:44:58.470395 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470402 | orchestrator |
2025-06-01 04:44:58.470410 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-06-01 04:44:58.470418 | orchestrator |
2025-06-01 04:44:58.470425 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-06-01 04:44:58.470433 | orchestrator | Sunday 01 June 2025 04:43:18 +0000 (0:00:00.446) 0:02:21.434 ***********
2025-06-01 04:44:58.470441 | orchestrator | ok: [testbed-manager]
2025-06-01 04:44:58.470449 | orchestrator |
2025-06-01 04:44:58.470457 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-06-01 04:44:58.470464 | orchestrator | Sunday 01 June 2025 04:43:19 +0000 (0:00:00.177) 0:02:21.611 ***********
2025-06-01 04:44:58.470472 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-06-01 04:44:58.470480 | orchestrator |
2025-06-01 04:44:58.470488 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-06-01 04:44:58.470496 | orchestrator | Sunday 01 June 2025 04:43:19 +0000 (0:00:00.188) 0:02:21.800 ***********
2025-06-01 04:44:58.470503 | orchestrator | ok: [testbed-manager]
2025-06-01 04:44:58.470511 | orchestrator |
2025-06-01 04:44:58.470540 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-06-01 04:44:58.470555 | orchestrator | Sunday 01 June 2025 04:43:20 +0000 (0:00:01.088) 0:02:22.888 ***********
2025-06-01 04:44:58.470564 | orchestrator | ok: [testbed-manager]
2025-06-01 04:44:58.470571 | orchestrator |
2025-06-01 04:44:58.470579 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-06-01 04:44:58.470587 | orchestrator | Sunday 01 June 2025 04:43:21 +0000 (0:00:01.217) 0:02:24.106 ***********
2025-06-01 04:44:58.470595 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470602 | orchestrator |
2025-06-01 04:44:58.470610 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-06-01 04:44:58.470623 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.646) 0:02:24.752 ***********
2025-06-01 04:44:58.470631 | orchestrator | ok: [testbed-manager]
2025-06-01 04:44:58.470639 | orchestrator |
2025-06-01 04:44:58.470647 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-06-01 04:44:58.470654 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.331) 0:02:25.084 ***********
2025-06-01 04:44:58.470662 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470670 | orchestrator |
2025-06-01 04:44:58.470678 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-06-01 04:44:58.470691 | orchestrator | Sunday 01 June 2025 04:43:27 +0000 (0:00:04.995) 0:02:30.080 ***********
2025-06-01 04:44:58.470699 | orchestrator | changed: [testbed-manager]
2025-06-01 04:44:58.470706 | orchestrator |
2025-06-01 04:44:58.470714 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-06-01 04:44:58.470722 | orchestrator | Sunday 01 June 2025 04:43:38 +0000 (0:00:10.436) 0:02:40.516 ***********
2025-06-01 04:44:58.470729 | orchestrator | ok: [testbed-manager]
2025-06-01 04:44:58.470737 | orchestrator |
2025-06-01 04:44:58.470745 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-06-01 04:44:58.470753 | orchestrator |
2025-06-01 04:44:58.470761 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-06-01 04:44:58.470768 | orchestrator | Sunday 01 June 2025 04:43:38 +0000 (0:00:00.483) 0:02:40.999 ***********
2025-06-01 04:44:58.470776 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.470784 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.470792 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.470799 | orchestrator |
2025-06-01 04:44:58.470807 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-06-01 04:44:58.470815 | orchestrator | Sunday 01 June 2025 04:43:39 +0000 (0:00:00.561) 0:02:41.560 ***********
2025-06-01 04:44:58.470823 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.470831 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.470839 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.470846 | orchestrator |
2025-06-01 04:44:58.470854 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-06-01 04:44:58.470862 | orchestrator | Sunday 01 June 2025 04:43:39 +0000 (0:00:00.326) 0:02:41.887 ***********
2025-06-01 04:44:58.470870 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:44:58.470878 | orchestrator |
2025-06-01 04:44:58.470886 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-06-01 04:44:58.470899 | orchestrator | Sunday 01 June 2025 04:43:39 +0000 (0:00:00.544) 0:02:42.431 ***********
2025-06-01 04:44:58.470907 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.470915 | orchestrator |
2025-06-01 04:44:58.470923 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-06-01 04:44:58.470930 | orchestrator | Sunday 01 June 2025 04:43:41 +0000 (0:00:01.063) 0:02:43.495 ***********
2025-06-01 04:44:58.470938 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.470946 | orchestrator |
2025-06-01 04:44:58.470954 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-06-01 04:44:58.470962 | orchestrator | Sunday 01 June 2025 04:43:41 +0000 (0:00:00.788) 0:02:44.283 ***********
2025-06-01 04:44:58.470970 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.470977 | orchestrator |
2025-06-01 04:44:58.470985 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-06-01 04:44:58.470993 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:00.442) 0:02:44.726 ***********
2025-06-01 04:44:58.471001 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.471009 | orchestrator |
2025-06-01 04:44:58.471017 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-06-01 04:44:58.471025 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.943) 0:02:45.670 ***********
2025-06-01 04:44:58.471032 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.471040 | orchestrator |
2025-06-01 04:44:58.471048 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-06-01 04:44:58.471056 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.179) 0:02:45.850 ***********
2025-06-01 04:44:58.471064 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.471072 | orchestrator |
2025-06-01 04:44:58.471080 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-06-01 04:44:58.471093 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.203) 0:02:46.053 ***********
2025-06-01 04:44:58.471101 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.471109 | orchestrator |
2025-06-01 04:44:58.471117 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-06-01 04:44:58.471125 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.190) 0:02:46.243 ***********
2025-06-01 04:44:58.471132 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.471140 | orchestrator |
2025-06-01 04:44:58.471148 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-06-01 04:44:58.471156 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.205) 0:02:46.449 ***********
2025-06-01 04:44:58.471164 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.471171 | orchestrator |
2025-06-01 04:44:58.471179 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-06-01 04:44:58.471187 | orchestrator | Sunday 01 June 2025 04:43:48 +0000 (0:00:04.157) 0:02:50.606 ***********
2025-06-01 04:44:58.471195 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-06-01 04:44:58.471203 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-06-01 04:44:58.471211 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-06-01 04:44:58.471219 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-06-01 04:44:58.471226 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-06-01 04:44:58.471234 | orchestrator |
2025-06-01 04:44:58.471242 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-06-01 04:44:58.471250 | orchestrator | Sunday 01 June 2025 04:44:30 +0000 (0:00:42.406) 0:03:33.013 ***********
2025-06-01 04:44:58.471258 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.471266 | orchestrator |
2025-06-01 04:44:58.471274 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-06-01 04:44:58.471282 | orchestrator | Sunday 01 June 2025 04:44:31 +0000 (0:00:01.080) 0:03:34.093 ***********
2025-06-01 04:44:58.471289 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.471297 | orchestrator |
2025-06-01 04:44:58.471305 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-06-01 04:44:58.471313 | orchestrator | Sunday 01 June 2025 04:44:32 +0000 (0:00:01.328) 0:03:35.422 ***********
2025-06-01 04:44:58.471320 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 04:44:58.471328 | orchestrator |
2025-06-01 04:44:58.471336 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-06-01 04:44:58.471344 | orchestrator | Sunday 01 June 2025 04:44:34 +0000 (0:00:01.051) 0:03:36.474 ***********
2025-06-01 04:44:58.471351 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.471359 | orchestrator |
2025-06-01 04:44:58.471367 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-06-01 04:44:58.471375 | orchestrator | Sunday 01 June 2025 04:44:34 +0000 (0:00:00.185) 0:03:36.659 ***********
2025-06-01 04:44:58.471383 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-06-01 04:44:58.471391 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-06-01 04:44:58.471398 | orchestrator |
2025-06-01 04:44:58.471406 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-06-01 04:44:58.472032 | orchestrator | Sunday 01 June 2025 04:44:36 +0000 (0:00:02.365) 0:03:39.024 ***********
2025-06-01 04:44:58.472064 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:44:58.472079 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:44:58.472090 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:44:58.472101 | orchestrator |
2025-06-01 04:44:58.472112 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-06-01 04:44:58.472122 | orchestrator | Sunday 01 June 2025 04:44:37 +0000 (0:00:00.619) 0:03:39.644 ***********
2025-06-01 04:44:58.472138 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:44:58.472144 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:44:58.472151 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:44:58.472158 | orchestrator |
2025-06-01 04:44:58.472176 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-06-01 04:44:58.472187 | orchestrator |
2025-06-01 04:44:58.472197 | orchestrator
| TASK [k9s : Gather variables for each operating system] ************************ 2025-06-01 04:44:58.472208 | orchestrator | Sunday 01 June 2025 04:44:38 +0000 (0:00:00.906) 0:03:40.551 *********** 2025-06-01 04:44:58.472218 | orchestrator | ok: [testbed-manager] 2025-06-01 04:44:58.472229 | orchestrator | 2025-06-01 04:44:58.472239 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-01 04:44:58.472252 | orchestrator | Sunday 01 June 2025 04:44:38 +0000 (0:00:00.115) 0:03:40.666 *********** 2025-06-01 04:44:58.472264 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 04:44:58.472275 | orchestrator | 2025-06-01 04:44:58.472286 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-01 04:44:58.472297 | orchestrator | Sunday 01 June 2025 04:44:38 +0000 (0:00:00.290) 0:03:40.957 *********** 2025-06-01 04:44:58.472308 | orchestrator | changed: [testbed-manager] 2025-06-01 04:44:58.472317 | orchestrator | 2025-06-01 04:44:58.472324 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-01 04:44:58.472331 | orchestrator | 2025-06-01 04:44:58.472337 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-01 04:44:58.472344 | orchestrator | Sunday 01 June 2025 04:44:43 +0000 (0:00:05.179) 0:03:46.137 *********** 2025-06-01 04:44:58.472351 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:44:58.472357 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:44:58.472364 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:44:58.472370 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:44:58.472377 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:44:58.472383 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:44:58.472390 | orchestrator | 2025-06-01 04:44:58.472396 | orchestrator | TASK [Manage labels] 
*********************************************************** 2025-06-01 04:44:58.472403 | orchestrator | Sunday 01 June 2025 04:44:44 +0000 (0:00:00.541) 0:03:46.678 *********** 2025-06-01 04:44:58.472410 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-01 04:44:58.472417 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-01 04:44:58.472423 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-01 04:44:58.472430 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-01 04:44:58.472436 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-01 04:44:58.472443 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-01 04:44:58.472454 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-01 04:44:58.472461 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-01 04:44:58.472467 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-01 04:44:58.472474 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-01 04:44:58.472480 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-01 04:44:58.472487 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-01 04:44:58.472493 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-01 04:44:58.472500 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-01 04:44:58.472506 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-01 04:44:58.472544 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-01 04:44:58.472551 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-01 04:44:58.472558 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-01 04:44:58.472564 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-01 04:44:58.472571 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-01 04:44:58.472577 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-01 04:44:58.472584 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-01 04:44:58.472590 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-01 04:44:58.472597 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-01 04:44:58.472603 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-01 04:44:58.472610 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-01 04:44:58.472616 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-01 04:44:58.472623 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-01 04:44:58.472630 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-01 04:44:58.472636 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-01 04:44:58.472643 | orchestrator | 2025-06-01 04:44:58.472655 | 
orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-01 04:44:58.472662 | orchestrator | Sunday 01 June 2025 04:44:55 +0000 (0:00:11.160) 0:03:57.839 *********** 2025-06-01 04:44:58.472668 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.472675 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.472682 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.472688 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.472695 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.472701 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.472708 | orchestrator | 2025-06-01 04:44:58.472715 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-01 04:44:58.472721 | orchestrator | Sunday 01 June 2025 04:44:55 +0000 (0:00:00.489) 0:03:58.328 *********** 2025-06-01 04:44:58.472728 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:44:58.472734 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:44:58.472741 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:44:58.472747 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:44:58.472754 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:44:58.472761 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:44:58.472767 | orchestrator | 2025-06-01 04:44:58.472774 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:44:58.472781 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:44:58.472789 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-01 04:44:58.472800 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-01 04:44:58.472812 | orchestrator | testbed-node-2 : ok=34  
changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-01 04:44:58.472831 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-01 04:44:58.472842 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-01 04:44:58.472858 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-01 04:44:58.472865 | orchestrator | 2025-06-01 04:44:58.472872 | orchestrator | 2025-06-01 04:44:58.472879 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:44:58.472885 | orchestrator | Sunday 01 June 2025 04:44:56 +0000 (0:00:00.546) 0:03:58.875 *********** 2025-06-01 04:44:58.472892 | orchestrator | =============================================================================== 2025-06-01 04:44:58.472898 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.41s 2025-06-01 04:44:58.472906 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.41s 2025-06-01 04:44:58.472912 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.93s 2025-06-01 04:44:58.472919 | orchestrator | Manage labels ---------------------------------------------------------- 11.16s 2025-06-01 04:44:58.472925 | orchestrator | kubectl : Install required packages ------------------------------------ 10.44s 2025-06-01 04:44:58.472932 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.59s 2025-06-01 04:44:58.472938 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.99s 2025-06-01 04:44:58.472945 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.18s 2025-06-01 04:44:58.472951 | orchestrator | kubectl : Add 
repository Debian ----------------------------------------- 5.00s 2025-06-01 04:44:58.472958 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.16s 2025-06-01 04:44:58.472965 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s 2025-06-01 04:44:58.472971 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.37s 2025-06-01 04:44:58.472978 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.86s 2025-06-01 04:44:58.472984 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.85s 2025-06-01 04:44:58.472991 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.75s 2025-06-01 04:44:58.472997 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.74s 2025-06-01 04:44:58.473004 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.73s 2025-06-01 04:44:58.473010 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.68s 2025-06-01 04:44:58.473017 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.60s 2025-06-01 04:44:58.473024 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.49s 2025-06-01 04:44:58.473030 | orchestrator | 2025-06-01 04:44:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:01.496925 | orchestrator | 2025-06-01 04:45:01 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:01.497036 | orchestrator | 2025-06-01 04:45:01 | INFO  | Task 8d700b40-18c8-45cd-9b1c-fa24e7d08b74 is in state STARTED 2025-06-01 04:45:01.499248 | orchestrator | 2025-06-01 04:45:01 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in 
state STARTED 2025-06-01 04:45:01.500060 | orchestrator | 2025-06-01 04:45:01 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:01.500792 | orchestrator | 2025-06-01 04:45:01 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:01.501765 | orchestrator | 2025-06-01 04:45:01 | INFO  | Task 3df3593c-0794-4b66-818d-e619ea455a35 is in state STARTED 2025-06-01 04:45:01.501788 | orchestrator | 2025-06-01 04:45:01 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:04.540062 | orchestrator | 2025-06-01 04:45:04 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:04.541771 | orchestrator | 2025-06-01 04:45:04 | INFO  | Task 8d700b40-18c8-45cd-9b1c-fa24e7d08b74 is in state STARTED 2025-06-01 04:45:04.542422 | orchestrator | 2025-06-01 04:45:04 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:04.544122 | orchestrator | 2025-06-01 04:45:04 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:04.545149 | orchestrator | 2025-06-01 04:45:04 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:04.546437 | orchestrator | 2025-06-01 04:45:04 | INFO  | Task 3df3593c-0794-4b66-818d-e619ea455a35 is in state SUCCESS 2025-06-01 04:45:04.546492 | orchestrator | 2025-06-01 04:45:04 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:07.591207 | orchestrator | 2025-06-01 04:45:07 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:07.591830 | orchestrator | 2025-06-01 04:45:07 | INFO  | Task 8d700b40-18c8-45cd-9b1c-fa24e7d08b74 is in state SUCCESS 2025-06-01 04:45:07.593203 | orchestrator | 2025-06-01 04:45:07 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:07.594891 | orchestrator | 2025-06-01 04:45:07 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state 
STARTED 2025-06-01 04:45:07.596761 | orchestrator | 2025-06-01 04:45:07 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:07.597010 | orchestrator | 2025-06-01 04:45:07 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:10.643762 | orchestrator | 2025-06-01 04:45:10 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:10.646601 | orchestrator | 2025-06-01 04:45:10 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:10.648699 | orchestrator | 2025-06-01 04:45:10 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:10.650868 | orchestrator | 2025-06-01 04:45:10 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:10.650956 | orchestrator | 2025-06-01 04:45:10 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:13.693939 | orchestrator | 2025-06-01 04:45:13 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:13.694370 | orchestrator | 2025-06-01 04:45:13 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:13.695854 | orchestrator | 2025-06-01 04:45:13 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:13.697490 | orchestrator | 2025-06-01 04:45:13 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:13.697512 | orchestrator | 2025-06-01 04:45:13 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:16.747095 | orchestrator | 2025-06-01 04:45:16 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:16.748538 | orchestrator | 2025-06-01 04:45:16 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:16.749875 | orchestrator | 2025-06-01 04:45:16 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 
04:45:16.751124 | orchestrator | 2025-06-01 04:45:16 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:16.751179 | orchestrator | 2025-06-01 04:45:16 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:19.806227 | orchestrator | 2025-06-01 04:45:19 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:19.806739 | orchestrator | 2025-06-01 04:45:19 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:19.810223 | orchestrator | 2025-06-01 04:45:19 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:19.811173 | orchestrator | 2025-06-01 04:45:19 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:19.811450 | orchestrator | 2025-06-01 04:45:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:22.869942 | orchestrator | 2025-06-01 04:45:22 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:22.871171 | orchestrator | 2025-06-01 04:45:22 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:22.872721 | orchestrator | 2025-06-01 04:45:22 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:22.876102 | orchestrator | 2025-06-01 04:45:22 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:22.876545 | orchestrator | 2025-06-01 04:45:22 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:25.940742 | orchestrator | 2025-06-01 04:45:25 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:25.940871 | orchestrator | 2025-06-01 04:45:25 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:25.942480 | orchestrator | 2025-06-01 04:45:25 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:25.944031 | orchestrator 
| 2025-06-01 04:45:25 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:25.944318 | orchestrator | 2025-06-01 04:45:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:28.985080 | orchestrator | 2025-06-01 04:45:28 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:28.987468 | orchestrator | 2025-06-01 04:45:28 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:28.989207 | orchestrator | 2025-06-01 04:45:28 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:28.991346 | orchestrator | 2025-06-01 04:45:28 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:28.991424 | orchestrator | 2025-06-01 04:45:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:32.035312 | orchestrator | 2025-06-01 04:45:32 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:32.037644 | orchestrator | 2025-06-01 04:45:32 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:32.040945 | orchestrator | 2025-06-01 04:45:32 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:32.043666 | orchestrator | 2025-06-01 04:45:32 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:32.044204 | orchestrator | 2025-06-01 04:45:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:35.086935 | orchestrator | 2025-06-01 04:45:35 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:35.087153 | orchestrator | 2025-06-01 04:45:35 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:35.087875 | orchestrator | 2025-06-01 04:45:35 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:35.088926 | orchestrator | 2025-06-01 04:45:35 | INFO  | 
Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:35.088956 | orchestrator | 2025-06-01 04:45:35 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:38.134821 | orchestrator | 2025-06-01 04:45:38 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:38.136655 | orchestrator | 2025-06-01 04:45:38 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:38.138181 | orchestrator | 2025-06-01 04:45:38 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:38.140130 | orchestrator | 2025-06-01 04:45:38 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:38.140194 | orchestrator | 2025-06-01 04:45:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:41.196746 | orchestrator | 2025-06-01 04:45:41 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:41.198189 | orchestrator | 2025-06-01 04:45:41 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:41.199791 | orchestrator | 2025-06-01 04:45:41 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:41.200900 | orchestrator | 2025-06-01 04:45:41 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:41.201292 | orchestrator | 2025-06-01 04:45:41 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:44.241798 | orchestrator | 2025-06-01 04:45:44 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:44.242173 | orchestrator | 2025-06-01 04:45:44 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:44.243022 | orchestrator | 2025-06-01 04:45:44 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:44.245783 | orchestrator | 2025-06-01 04:45:44 | INFO  | Task 
5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:44.245826 | orchestrator | 2025-06-01 04:45:44 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:47.294290 | orchestrator | 2025-06-01 04:45:47 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:47.294811 | orchestrator | 2025-06-01 04:45:47 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:47.295893 | orchestrator | 2025-06-01 04:45:47 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:47.302151 | orchestrator | 2025-06-01 04:45:47 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:47.302223 | orchestrator | 2025-06-01 04:45:47 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:50.342287 | orchestrator | 2025-06-01 04:45:50 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:50.343003 | orchestrator | 2025-06-01 04:45:50 | INFO  | Task 73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state STARTED 2025-06-01 04:45:50.347220 | orchestrator | 2025-06-01 04:45:50 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:45:50.350913 | orchestrator | 2025-06-01 04:45:50 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:45:50.350951 | orchestrator | 2025-06-01 04:45:50 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:45:53.392771 | orchestrator | 2025-06-01 04:45:53 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED 2025-06-01 04:45:53.399585 | orchestrator | 2025-06-01 04:45:53.399649 | orchestrator | 2025-06-01 04:45:53.399663 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-01 04:45:53.399676 | orchestrator | 2025-06-01 04:45:53.399687 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 
2025-06-01 04:45:53.399698 | orchestrator | Sunday 01 June 2025 04:45:00 +0000 (0:00:00.124) 0:00:00.124 *********** 2025-06-01 04:45:53.399710 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-01 04:45:53.399721 | orchestrator | 2025-06-01 04:45:53.399732 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-01 04:45:53.399743 | orchestrator | Sunday 01 June 2025 04:45:00 +0000 (0:00:00.645) 0:00:00.769 *********** 2025-06-01 04:45:53.399754 | orchestrator | changed: [testbed-manager] 2025-06-01 04:45:53.399765 | orchestrator | 2025-06-01 04:45:53.399776 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-01 04:45:53.399787 | orchestrator | Sunday 01 June 2025 04:45:02 +0000 (0:00:01.044) 0:00:01.814 *********** 2025-06-01 04:45:53.399798 | orchestrator | changed: [testbed-manager] 2025-06-01 04:45:53.399809 | orchestrator | 2025-06-01 04:45:53.399819 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:45:53.399831 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:45:53.399843 | orchestrator | 2025-06-01 04:45:53.399854 | orchestrator | 2025-06-01 04:45:53.399865 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:45:53.399876 | orchestrator | Sunday 01 June 2025 04:45:02 +0000 (0:00:00.415) 0:00:02.230 *********** 2025-06-01 04:45:53.399887 | orchestrator | =============================================================================== 2025-06-01 04:45:53.399898 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.04s 2025-06-01 04:45:53.399908 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s 2025-06-01 04:45:53.399919 | orchestrator | Change server address 
in the kubeconfig file ---------------------------- 0.42s 2025-06-01 04:45:53.399930 | orchestrator | 2025-06-01 04:45:53.399940 | orchestrator | 2025-06-01 04:45:53.399951 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-01 04:45:53.399962 | orchestrator | 2025-06-01 04:45:53.399973 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-01 04:45:53.399983 | orchestrator | Sunday 01 June 2025 04:45:00 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-06-01 04:45:53.399994 | orchestrator | ok: [testbed-manager] 2025-06-01 04:45:53.400005 | orchestrator | 2025-06-01 04:45:53.400016 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-01 04:45:53.400027 | orchestrator | Sunday 01 June 2025 04:45:00 +0000 (0:00:00.486) 0:00:00.630 *********** 2025-06-01 04:45:53.400037 | orchestrator | ok: [testbed-manager] 2025-06-01 04:45:53.400048 | orchestrator | 2025-06-01 04:45:53.400059 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-01 04:45:53.400069 | orchestrator | Sunday 01 June 2025 04:45:01 +0000 (0:00:00.472) 0:00:01.103 *********** 2025-06-01 04:45:53.400080 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-01 04:45:53.400091 | orchestrator | 2025-06-01 04:45:53.400102 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-01 04:45:53.400112 | orchestrator | Sunday 01 June 2025 04:45:01 +0000 (0:00:00.591) 0:00:01.694 *********** 2025-06-01 04:45:53.400123 | orchestrator | changed: [testbed-manager] 2025-06-01 04:45:53.400134 | orchestrator | 2025-06-01 04:45:53.400145 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-01 04:45:53.400155 | orchestrator | Sunday 01 June 2025 04:45:02 +0000 (0:00:00.969) 0:00:02.664 
*********** 2025-06-01 04:45:53.400166 | orchestrator | changed: [testbed-manager] 2025-06-01 04:45:53.400177 | orchestrator | 2025-06-01 04:45:53.400205 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-01 04:45:53.400216 | orchestrator | Sunday 01 June 2025 04:45:03 +0000 (0:00:00.499) 0:00:03.163 *********** 2025-06-01 04:45:53.400227 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-01 04:45:53.400238 | orchestrator | 2025-06-01 04:45:53.400249 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-01 04:45:53.400259 | orchestrator | Sunday 01 June 2025 04:45:05 +0000 (0:00:02.129) 0:00:05.292 *********** 2025-06-01 04:45:53.400270 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-01 04:45:53.400281 | orchestrator | 2025-06-01 04:45:53.400291 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-01 04:45:53.400302 | orchestrator | Sunday 01 June 2025 04:45:06 +0000 (0:00:00.682) 0:00:05.975 *********** 2025-06-01 04:45:53.400312 | orchestrator | ok: [testbed-manager] 2025-06-01 04:45:53.400323 | orchestrator | 2025-06-01 04:45:53.400334 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-01 04:45:53.400344 | orchestrator | Sunday 01 June 2025 04:45:06 +0000 (0:00:00.346) 0:00:06.321 *********** 2025-06-01 04:45:53.400355 | orchestrator | ok: [testbed-manager] 2025-06-01 04:45:53.400365 | orchestrator | 2025-06-01 04:45:53.400376 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:45:53.400387 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:45:53.400397 | orchestrator | 2025-06-01 04:45:53.400408 | orchestrator | 2025-06-01 04:45:53.400419 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-01 04:45:53.400429 | orchestrator | Sunday 01 June 2025 04:45:06 +0000 (0:00:00.241) 0:00:06.563 *********** 2025-06-01 04:45:53.400453 | orchestrator | =============================================================================== 2025-06-01 04:45:53.400464 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.13s 2025-06-01 04:45:53.400475 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.97s 2025-06-01 04:45:53.400486 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.68s 2025-06-01 04:45:53.400529 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.59s 2025-06-01 04:45:53.400542 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.50s 2025-06-01 04:45:53.400553 | orchestrator | Get home directory of operator user ------------------------------------- 0.49s 2025-06-01 04:45:53.400564 | orchestrator | Create .kube directory -------------------------------------------------- 0.47s 2025-06-01 04:45:53.400574 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s 2025-06-01 04:45:53.400585 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.24s 2025-06-01 04:45:53.400595 | orchestrator | 2025-06-01 04:45:53.400606 | orchestrator | 2025-06-01 04:45:53.400617 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-01 04:45:53.400627 | orchestrator | 2025-06-01 04:45:53.400638 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-01 04:45:53.400658 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:00.125) 0:00:00.125 *********** 2025-06-01 04:45:53.400670 | orchestrator | ok: [localhost] => { 
2025-06-01 04:45:53.400681 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-01 04:45:53.400693 | orchestrator | }
2025-06-01 04:45:53.400704 | orchestrator |
2025-06-01 04:45:53.400715 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-01 04:45:53.400725 | orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:00.076) 0:00:00.202 ***********
2025-06-01 04:45:53.400737 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-01 04:45:53.400749 | orchestrator | ...ignoring
2025-06-01 04:45:53.400768 | orchestrator |
2025-06-01 04:45:53.400779 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-01 04:45:53.400789 | orchestrator | Sunday 01 June 2025 04:43:46 +0000 (0:00:03.692) 0:00:03.894 ***********
2025-06-01 04:45:53.400799 | orchestrator | skipping: [localhost]
2025-06-01 04:45:53.400810 | orchestrator |
2025-06-01 04:45:53.400821 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-01 04:45:53.400831 | orchestrator | Sunday 01 June 2025 04:43:46 +0000 (0:00:00.078) 0:00:03.972 ***********
2025-06-01 04:45:53.400863 | orchestrator | ok: [localhost]
2025-06-01 04:45:53.400874 | orchestrator |
2025-06-01 04:45:53.400885 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:45:53.400895 | orchestrator |
2025-06-01 04:45:53.400906 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 04:45:53.400917 | orchestrator | Sunday 01 June 2025 04:43:46 +0000 (0:00:00.133) 0:00:04.106 ***********
2025-06-01 04:45:53.400927 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:45:53.400938 |
orchestrator | ok: [testbed-node-1] 2025-06-01 04:45:53.400948 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:45:53.400959 | orchestrator | 2025-06-01 04:45:53.400970 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:45:53.400980 | orchestrator | Sunday 01 June 2025 04:43:46 +0000 (0:00:00.314) 0:00:04.421 *********** 2025-06-01 04:45:53.400991 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-01 04:45:53.401002 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-01 04:45:53.401013 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-01 04:45:53.401023 | orchestrator | 2025-06-01 04:45:53.401034 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-01 04:45:53.401044 | orchestrator | 2025-06-01 04:45:53.401055 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-01 04:45:53.401066 | orchestrator | Sunday 01 June 2025 04:43:47 +0000 (0:00:00.647) 0:00:05.069 *********** 2025-06-01 04:45:53.401076 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:45:53.401087 | orchestrator | 2025-06-01 04:45:53.401112 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-01 04:45:53.401134 | orchestrator | Sunday 01 June 2025 04:43:47 +0000 (0:00:00.738) 0:00:05.807 *********** 2025-06-01 04:45:53.401145 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:45:53.401156 | orchestrator | 2025-06-01 04:45:53.401166 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-01 04:45:53.401177 | orchestrator | Sunday 01 June 2025 04:43:48 +0000 (0:00:00.822) 0:00:06.630 *********** 2025-06-01 04:45:53.401187 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
04:45:53.401198 | orchestrator | 2025-06-01 04:45:53.401208 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-01 04:45:53.401219 | orchestrator | Sunday 01 June 2025 04:43:49 +0000 (0:00:00.313) 0:00:06.944 *********** 2025-06-01 04:45:53.401230 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.401240 | orchestrator | 2025-06-01 04:45:53.401251 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-01 04:45:53.401261 | orchestrator | Sunday 01 June 2025 04:43:49 +0000 (0:00:00.342) 0:00:07.286 *********** 2025-06-01 04:45:53.401272 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.401283 | orchestrator | 2025-06-01 04:45:53.401293 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-01 04:45:53.401304 | orchestrator | Sunday 01 June 2025 04:43:49 +0000 (0:00:00.337) 0:00:07.623 *********** 2025-06-01 04:45:53.401314 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.401324 | orchestrator | 2025-06-01 04:45:53.401335 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-01 04:45:53.401351 | orchestrator | Sunday 01 June 2025 04:43:50 +0000 (0:00:00.632) 0:00:08.256 *********** 2025-06-01 04:45:53.401369 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:45:53.401380 | orchestrator | 2025-06-01 04:45:53.401390 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-01 04:45:53.401407 | orchestrator | Sunday 01 June 2025 04:43:51 +0000 (0:00:01.330) 0:00:09.587 *********** 2025-06-01 04:45:53.401418 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:45:53.401429 | orchestrator | 2025-06-01 04:45:53.401439 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] 
*************************************** 2025-06-01 04:45:53.401450 | orchestrator | Sunday 01 June 2025 04:43:52 +0000 (0:00:01.129) 0:00:10.717 *********** 2025-06-01 04:45:53.401460 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.401471 | orchestrator | 2025-06-01 04:45:53.401481 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-01 04:45:53.401492 | orchestrator | Sunday 01 June 2025 04:43:53 +0000 (0:00:00.746) 0:00:11.463 *********** 2025-06-01 04:45:53.401502 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.401535 | orchestrator | 2025-06-01 04:45:53.401546 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-01 04:45:53.401557 | orchestrator | Sunday 01 June 2025 04:43:54 +0000 (0:00:00.447) 0:00:11.911 *********** 2025-06-01 04:45:53.401573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.401590 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.401603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.401622 | orchestrator | 2025-06-01 04:45:53.401638 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-01 04:45:53.401663 | orchestrator | Sunday 01 June 2025 04:43:55 +0000 (0:00:01.158) 0:00:13.070 *********** 2025-06-01 04:45:53.401683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.401696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.401708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.401720 | orchestrator | 2025-06-01 04:45:53.401731 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-01 04:45:53.401748 | orchestrator | Sunday 01 June 2025 04:43:56 +0000 (0:00:01.510) 0:00:14.580 *********** 2025-06-01 
04:45:53.401759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-01 04:45:53.401770 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-01 04:45:53.401781 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-01 04:45:53.401792 | orchestrator | 2025-06-01 04:45:53.401802 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-01 04:45:53.401813 | orchestrator | Sunday 01 June 2025 04:43:58 +0000 (0:00:01.371) 0:00:15.952 *********** 2025-06-01 04:45:53.401824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-01 04:45:53.401839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-01 04:45:53.401850 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-01 04:45:53.401861 | orchestrator | 2025-06-01 04:45:53.401871 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-01 04:45:53.401887 | orchestrator | Sunday 01 June 2025 04:43:59 +0000 (0:00:01.651) 0:00:17.603 *********** 2025-06-01 04:45:53.401898 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-01 04:45:53.401909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-01 04:45:53.401920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-01 04:45:53.401930 | orchestrator | 2025-06-01 04:45:53.401941 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-01 04:45:53.401952 | orchestrator | Sunday 01 June 2025 04:44:00 +0000 
(0:00:01.248) 0:00:18.851 *********** 2025-06-01 04:45:53.401962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-01 04:45:53.401973 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-01 04:45:53.401984 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-01 04:45:53.401994 | orchestrator | 2025-06-01 04:45:53.402005 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-01 04:45:53.402073 | orchestrator | Sunday 01 June 2025 04:44:03 +0000 (0:00:02.232) 0:00:21.084 *********** 2025-06-01 04:45:53.402087 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-01 04:45:53.402098 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-01 04:45:53.402109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-01 04:45:53.402120 | orchestrator | 2025-06-01 04:45:53.402131 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-01 04:45:53.402141 | orchestrator | Sunday 01 June 2025 04:44:04 +0000 (0:00:01.592) 0:00:22.677 *********** 2025-06-01 04:45:53.402152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-01 04:45:53.402163 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-01 04:45:53.402173 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-01 04:45:53.402184 | orchestrator | 2025-06-01 04:45:53.402194 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-01 
04:45:53.402205 | orchestrator | Sunday 01 June 2025 04:44:06 +0000 (0:00:01.923) 0:00:24.600 *********** 2025-06-01 04:45:53.402216 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.402227 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:45:53.402252 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:45:53.402263 | orchestrator | 2025-06-01 04:45:53.402274 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-01 04:45:53.402285 | orchestrator | Sunday 01 June 2025 04:44:07 +0000 (0:00:00.436) 0:00:25.036 *********** 2025-06-01 04:45:53.402297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.402322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.402336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 04:45:53.402347 | orchestrator | 2025-06-01 04:45:53.402358 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
************************************* 2025-06-01 04:45:53.402368 | orchestrator | Sunday 01 June 2025 04:44:08 +0000 (0:00:01.309) 0:00:26.345 *********** 2025-06-01 04:45:53.402379 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:45:53.402390 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:45:53.402400 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:45:53.402411 | orchestrator | 2025-06-01 04:45:53.402421 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-01 04:45:53.402439 | orchestrator | Sunday 01 June 2025 04:44:09 +0000 (0:00:00.957) 0:00:27.303 *********** 2025-06-01 04:45:53.402450 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:45:53.402460 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:45:53.402485 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:45:53.402496 | orchestrator | 2025-06-01 04:45:53.402507 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-01 04:45:53.402570 | orchestrator | Sunday 01 June 2025 04:44:16 +0000 (0:00:07.544) 0:00:34.847 *********** 2025-06-01 04:45:53.402581 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:45:53.402592 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:45:53.402603 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:45:53.402613 | orchestrator | 2025-06-01 04:45:53.402624 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-01 04:45:53.402635 | orchestrator | 2025-06-01 04:45:53.402645 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-01 04:45:53.402656 | orchestrator | Sunday 01 June 2025 04:44:17 +0000 (0:00:00.373) 0:00:35.221 *********** 2025-06-01 04:45:53.402667 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:45:53.402677 | orchestrator | 2025-06-01 04:45:53.402688 | orchestrator | TASK [rabbitmq : Put RabbitMQ node 
into maintenance mode] ********************** 2025-06-01 04:45:53.402699 | orchestrator | Sunday 01 June 2025 04:44:17 +0000 (0:00:00.595) 0:00:35.817 *********** 2025-06-01 04:45:53.402710 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:45:53.402721 | orchestrator | 2025-06-01 04:45:53.402732 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-01 04:45:53.402742 | orchestrator | Sunday 01 June 2025 04:44:18 +0000 (0:00:00.189) 0:00:36.006 *********** 2025-06-01 04:45:53.402753 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:45:53.402764 | orchestrator | 2025-06-01 04:45:53.402775 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-01 04:45:53.402785 | orchestrator | Sunday 01 June 2025 04:44:19 +0000 (0:00:01.602) 0:00:37.609 *********** 2025-06-01 04:45:53.402796 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:45:53.402807 | orchestrator | 2025-06-01 04:45:53.402817 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-01 04:45:53.402827 | orchestrator | 2025-06-01 04:45:53.402836 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-01 04:45:53.402846 | orchestrator | Sunday 01 June 2025 04:45:14 +0000 (0:00:54.255) 0:01:31.864 *********** 2025-06-01 04:45:53.402855 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:45:53.402865 | orchestrator | 2025-06-01 04:45:53.402874 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-01 04:45:53.402884 | orchestrator | Sunday 01 June 2025 04:45:14 +0000 (0:00:00.579) 0:01:32.443 *********** 2025-06-01 04:45:53.402893 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:45:53.402903 | orchestrator | 2025-06-01 04:45:53.402912 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 
2025-06-01 04:45:53.402922 | orchestrator | Sunday 01 June 2025 04:45:14 +0000 (0:00:00.371) 0:01:32.815 *********** 2025-06-01 04:45:53.402931 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:45:53.402941 | orchestrator | 2025-06-01 04:45:53.402950 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-01 04:45:53.402960 | orchestrator | Sunday 01 June 2025 04:45:21 +0000 (0:00:06.882) 0:01:39.697 *********** 2025-06-01 04:45:53.402969 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:45:53.402979 | orchestrator | 2025-06-01 04:45:53.402993 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-01 04:45:53.403003 | orchestrator | 2025-06-01 04:45:53.403012 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-01 04:45:53.403022 | orchestrator | Sunday 01 June 2025 04:45:31 +0000 (0:00:09.790) 0:01:49.488 *********** 2025-06-01 04:45:53.403031 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:45:53.403041 | orchestrator | 2025-06-01 04:45:53.403062 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-01 04:45:53.403073 | orchestrator | Sunday 01 June 2025 04:45:32 +0000 (0:00:00.632) 0:01:50.120 *********** 2025-06-01 04:45:53.403082 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:45:53.403091 | orchestrator | 2025-06-01 04:45:53.403101 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-01 04:45:53.403122 | orchestrator | Sunday 01 June 2025 04:45:32 +0000 (0:00:00.237) 0:01:50.357 *********** 2025-06-01 04:45:53.403131 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:45:53.403141 | orchestrator | 2025-06-01 04:45:53.403150 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-01 04:45:53.403160 | orchestrator | Sunday 01 
June 2025 04:45:39 +0000 (0:00:07.134) 0:01:57.492 *********** 2025-06-01 04:45:53.403169 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:45:53.403179 | orchestrator | 2025-06-01 04:45:53.403188 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-01 04:45:53.403198 | orchestrator | 2025-06-01 04:45:53.403207 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-01 04:45:53.403217 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:09.103) 0:02:06.596 *********** 2025-06-01 04:45:53.403226 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:45:53.403236 | orchestrator | 2025-06-01 04:45:53.403245 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-01 04:45:53.403255 | orchestrator | Sunday 01 June 2025 04:45:49 +0000 (0:00:01.038) 0:02:07.634 *********** 2025-06-01 04:45:53.403264 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 04:45:53.403274 | orchestrator | enable_outward_rabbitmq_True 2025-06-01 04:45:53.403284 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 04:45:53.403293 | orchestrator | outward_rabbitmq_restart 2025-06-01 04:45:53.403303 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:45:53.403312 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:45:53.403322 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:45:53.403331 | orchestrator | 2025-06-01 04:45:53.403340 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-01 04:45:53.403350 | orchestrator | skipping: no hosts matched 2025-06-01 04:45:53.403359 | orchestrator | 2025-06-01 04:45:53.403369 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-01 04:45:53.403389 | orchestrator | skipping: no 
hosts matched
2025-06-01 04:45:53.403398 | orchestrator |
2025-06-01 04:45:53.403408 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-01 04:45:53.403418 | orchestrator | skipping: no hosts matched
2025-06-01 04:45:53.403427 | orchestrator |
2025-06-01 04:45:53.403436 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:45:53.403446 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-01 04:45:53.403457 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-01 04:45:53.403466 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:45:53.403476 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:45:53.403485 | orchestrator |
2025-06-01 04:45:53.403495 | orchestrator |
2025-06-01 04:45:53.403504 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:45:53.403532 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:02.333) 0:02:09.968 ***********
2025-06-01 04:45:53.403541 | orchestrator | ===============================================================================
2025-06-01 04:45:53.403557 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 73.15s
2025-06-01 04:45:53.403567 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.62s
2025-06-01 04:45:53.403577 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.54s
2025-06-01 04:45:53.403586 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.69s
2025-06-01 04:45:53.403596 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.33s
2025-06-01 04:45:53.403605 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.23s
2025-06-01 04:45:53.403615 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.92s
2025-06-01 04:45:53.403624 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.81s
2025-06-01 04:45:53.403634 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.65s
2025-06-01 04:45:53.403643 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.59s
2025-06-01 04:45:53.403653 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.51s
2025-06-01 04:45:53.403662 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.37s
2025-06-01 04:45:53.403672 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.33s
2025-06-01 04:45:53.403686 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.31s
2025-06-01 04:45:53.403696 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.25s
2025-06-01 04:45:53.403705 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.16s
2025-06-01 04:45:53.403715 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.13s
2025-06-01 04:45:53.403729 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.04s
2025-06-01 04:45:53.403738 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.96s
2025-06-01 04:45:53.403748 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.82s
2025-06-01 04:45:53.403758 | orchestrator |
2025-06-01 04:45:53 | INFO  | Task
73ff51af-daae-4e79-8d9d-5879ffeb99e1 is in state SUCCESS
2025-06-01 04:45:53.403768 | orchestrator | 2025-06-01 04:45:53 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:45:53.403777 | orchestrator | 2025-06-01 04:45:53 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:45:53.403787 | orchestrator | 2025-06-01 04:45:53 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:46:54.406417 | orchestrator | 2025-06-01 04:46:54 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state STARTED
2025-06-01 04:46:54.408825 | orchestrator | 2025-06-01 04:46:54 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:46:54.410488 | orchestrator | 2025-06-01 04:46:54 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:46:54.410517 | orchestrator | 2025-06-01 04:46:54 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:46:57.457838 | orchestrator | 2025-06-01 04:46:57 | INFO  | Task d1e80bc7-ff22-4a49-96d4-ec987eea57f2 is in state SUCCESS
2025-06-01 04:46:57.458962 | orchestrator |
2025-06-01 04:46:57.459019 | orchestrator |
2025-06-01 04:46:57.459052 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:46:57.459076 | orchestrator |
2025-06-01 04:46:57.459096 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 04:46:57.459116 | orchestrator | Sunday 01 June 2025 04:44:26 +0000 (0:00:00.297) 0:00:00.297 ***********
2025-06-01 04:46:57.459137 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.459158 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.459177 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.459196 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:46:57.459215 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:46:57.459235 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:46:57.459255 | orchestrator |
2025-06-01 04:46:57.459275 | orchestrator | TASK [Group hosts based
on enabled services] *********************************** 2025-06-01 04:46:57.459303 | orchestrator | Sunday 01 June 2025 04:44:27 +0000 (0:00:01.105) 0:00:01.402 *********** 2025-06-01 04:46:57.459323 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-01 04:46:57.459342 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-01 04:46:57.459361 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-01 04:46:57.459380 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-01 04:46:57.459399 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-01 04:46:57.459419 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-01 04:46:57.459440 | orchestrator | 2025-06-01 04:46:57.459459 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-01 04:46:57.459519 | orchestrator | 2025-06-01 04:46:57.459538 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-01 04:46:57.459556 | orchestrator | Sunday 01 June 2025 04:44:28 +0000 (0:00:00.804) 0:00:02.207 *********** 2025-06-01 04:46:57.459575 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:46:57.459645 | orchestrator | 2025-06-01 04:46:57.459665 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-01 04:46:57.459682 | orchestrator | Sunday 01 June 2025 04:44:29 +0000 (0:00:01.075) 0:00:03.283 *********** 2025-06-01 04:46:57.459697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459774 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459784 | orchestrator | 2025-06-01 04:46:57.459810 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-01 04:46:57.459822 | orchestrator | Sunday 01 June 2025 04:44:31 +0000 (0:00:01.541) 0:00:04.824 *********** 2025-06-01 04:46:57.459833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.459928 | orchestrator | 2025-06-01 04:46:57.459947 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-01 04:46:57.459965 | orchestrator | Sunday 01 June 2025 04:44:32 +0000 (0:00:01.439) 0:00:06.264 *********** 2025-06-01 04:46:57.459983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460079 | orchestrator | 2025-06-01 04:46:57.460089 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-01 04:46:57.460100 | orchestrator | Sunday 01 June 2025 04:44:33 +0000 (0:00:01.114) 0:00:07.379 *********** 2025-06-01 04:46:57.460111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460183 | orchestrator | 2025-06-01 04:46:57.460199 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-01 04:46:57.460211 | orchestrator | Sunday 01 June 2025 04:44:35 +0000 (0:00:01.432) 0:00:08.811 *********** 2025-06-01 04:46:57.460230 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.460302 | orchestrator | 2025-06-01 04:46:57.460313 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-01 04:46:57.460324 | orchestrator | Sunday 01 June 2025 04:44:37 +0000 (0:00:02.127) 0:00:10.938 *********** 2025-06-01 04:46:57.460335 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:46:57.460346 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:46:57.460357 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:46:57.460367 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:46:57.460378 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:46:57.460388 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:46:57.460399 | orchestrator | 2025-06-01 04:46:57.460410 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-01 04:46:57.460420 | orchestrator | Sunday 01 June 2025 04:44:40 +0000 (0:00:02.683) 0:00:13.622 *********** 2025-06-01 04:46:57.460431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-01 04:46:57.460442 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-01 04:46:57.460452 | 
orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-01 04:46:57.460463 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-01 04:46:57.460529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-01 04:46:57.460541 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-01 04:46:57.460551 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-01 04:46:57.460562 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-01 04:46:57.460579 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-01 04:46:57.460590 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-01 04:46:57.460601 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-01 04:46:57.460612 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-01 04:46:57.460623 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-01 04:46:57.460635 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-01 04:46:57.460645 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-01 04:46:57.460656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-01 
04:46:57.460667 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 04:46:57.460678 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 04:46:57.460688 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 04:46:57.460701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 04:46:57.460711 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 04:46:57.460722 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 04:46:57.460733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 04:46:57.460743 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 04:46:57.460754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 04:46:57.460765 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 04:46:57.460781 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 04:46:57.460792 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 04:46:57.460803 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 04:46:57.460814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 04:46:57.460824 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 04:46:57.460835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 04:46:57.460845 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 04:46:57.460856 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-01 04:46:57.460874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 04:46:57.460884 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 04:46:57.460895 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 04:46:57.460906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-01 04:46:57.460917 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-01 04:46:57.460928 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-01 04:46:57.460939 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-01 04:46:57.460949 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-01 04:46:57.460960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-01 04:46:57.460970 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-01 04:46:57.460987 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-01 04:46:57.460998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-01 04:46:57.461009 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-01 04:46:57.461020 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-01 04:46:57.461031 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-01 04:46:57.461042 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-01 04:46:57.461052 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-01 04:46:57.461063 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-01 04:46:57.461074 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-01 04:46:57.461085 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-01 04:46:57.461095 | orchestrator |
2025-06-01 04:46:57.461106 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 04:46:57.461117 | orchestrator | Sunday 01 June 2025 04:44:58 +0000 (0:00:18.345) 0:00:31.967 ***********
2025-06-01 04:46:57.461127 | orchestrator |
2025-06-01 04:46:57.461138 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 04:46:57.461149 | orchestrator | Sunday 01 June 2025 04:44:58 +0000 (0:00:00.126) 0:00:32.093 ***********
2025-06-01 04:46:57.461159 | orchestrator |
2025-06-01 04:46:57.461170 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 04:46:57.461180 | orchestrator | Sunday 01 June 2025 04:44:58 +0000 (0:00:00.126) 0:00:32.220 ***********
2025-06-01 04:46:57.461191 | orchestrator |
2025-06-01 04:46:57.461202 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 04:46:57.461219 | orchestrator | Sunday 01 June 2025 04:44:58 +0000 (0:00:00.126) 0:00:32.346 ***********
2025-06-01 04:46:57.461230 | orchestrator |
2025-06-01 04:46:57.461241 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 04:46:57.461251 | orchestrator | Sunday 01 June 2025 04:44:58 +0000 (0:00:00.124) 0:00:32.471 ***********
2025-06-01 04:46:57.461262 | orchestrator |
2025-06-01 04:46:57.461278 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 04:46:57.461288 | orchestrator | Sunday 01 June 2025 04:44:59 +0000 (0:00:00.136) 0:00:32.607 ***********
2025-06-01 04:46:57.461299 | orchestrator |
2025-06-01 04:46:57.461310 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-01 04:46:57.461320 | orchestrator | Sunday 01 June 2025 04:44:59 +0000 (0:00:00.130) 0:00:32.738 ***********
2025-06-01 04:46:57.461331 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.461342 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.461352 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.461363 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:46:57.461374 | orchestrator | ok: [testbed-node-4]
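For reference, the per-chassis settings applied by the task above are ordinary OVS `external_ids` keys on the `Open_vSwitch` table. A minimal dry-run sketch of the equivalent `ovs-vsctl` calls, using values taken from this log; the `emit` helper is illustrative only and prints the commands instead of executing them:

```shell
# Print (not run) the ovs-vsctl calls matching the "changed" items above.
# Values come from the job log; the helper name "emit" is an assumption.
emit() { printf 'ovs-vsctl set Open_vSwitch . external_ids:%s=%s\n' "$1" "$2"; }

emit ovn-remote '"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"'
emit ovn-remote-probe-interval 60000
emit ovn-openflow-probe-interval 60
emit ovn-monitor-all false
emit ovn-bridge-mappings physnet1:br-ex
emit ovn-cms-options '"enable-chassis-as-gw,availability-zones=nova"'
```

Piping the printed lines into a shell on a chassis node would apply the same configuration that the ovn-controller role renders here.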
2025-06-01 04:46:57.461385 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:46:57.461395 | orchestrator |
2025-06-01 04:46:57.461406 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-01 04:46:57.461417 | orchestrator | Sunday 01 June 2025 04:45:01 +0000 (0:00:01.960) 0:00:34.699 ***********
2025-06-01 04:46:57.461428 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.461438 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:46:57.461449 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:46:57.461459 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:46:57.461494 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:46:57.461505 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.461516 | orchestrator |
2025-06-01 04:46:57.461527 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-01 04:46:57.461542 | orchestrator |
2025-06-01 04:46:57.461560 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-01 04:46:57.461578 | orchestrator | Sunday 01 June 2025 04:45:40 +0000 (0:00:39.077) 0:01:13.776 ***********
2025-06-01 04:46:57.461595 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:46:57.461613 | orchestrator |
2025-06-01 04:46:57.461631 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-01 04:46:57.461652 | orchestrator | Sunday 01 June 2025 04:45:40 +0000 (0:00:00.549) 0:01:14.326 ***********
2025-06-01 04:46:57.461672 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:46:57.461691 | orchestrator |
2025-06-01 04:46:57.461710 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-01 04:46:57.461728 | orchestrator | Sunday 01 June 2025 04:45:41 +0000 (0:00:00.653) 0:01:14.979 ***********
2025-06-01 04:46:57.461747 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.461765 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.461784 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.461796 | orchestrator |
2025-06-01 04:46:57.461806 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-01 04:46:57.461817 | orchestrator | Sunday 01 June 2025 04:45:42 +0000 (0:00:00.737) 0:01:15.716 ***********
2025-06-01 04:46:57.461827 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.461838 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.461848 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.461866 | orchestrator |
2025-06-01 04:46:57.461878 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-01 04:46:57.461888 | orchestrator | Sunday 01 June 2025 04:45:42 +0000 (0:00:00.492) 0:01:16.209 ***********
2025-06-01 04:46:57.461899 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.461909 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.461920 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.461940 | orchestrator |
2025-06-01 04:46:57.461951 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-01 04:46:57.461961 | orchestrator | Sunday 01 June 2025 04:45:43 +0000 (0:00:00.379) 0:01:16.588 ***********
2025-06-01 04:46:57.461972 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.461982 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.461993 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.462003 | orchestrator |
2025-06-01 04:46:57.462061 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-01 04:46:57.462076 | orchestrator | Sunday 01 June 2025 04:45:43 +0000 (0:00:00.661) 0:01:17.250 ***********
2025-06-01 04:46:57.462087 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.462098 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.462108 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.462119 | orchestrator |
2025-06-01 04:46:57.462130 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-01 04:46:57.462140 | orchestrator | Sunday 01 June 2025 04:45:44 +0000 (0:00:00.657) 0:01:17.907 ***********
2025-06-01 04:46:57.462151 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462162 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462172 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462183 | orchestrator |
2025-06-01 04:46:57.462193 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-01 04:46:57.462204 | orchestrator | Sunday 01 June 2025 04:45:44 +0000 (0:00:00.434) 0:01:18.342 ***********
2025-06-01 04:46:57.462214 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462225 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462235 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462246 | orchestrator |
2025-06-01 04:46:57.462256 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-01 04:46:57.462267 | orchestrator | Sunday 01 June 2025 04:45:45 +0000 (0:00:00.423) 0:01:18.765 ***********
2025-06-01 04:46:57.462278 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462288 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462299 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462309 | orchestrator |
2025-06-01 04:46:57.462322 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-01 04:46:57.462336 | orchestrator | Sunday 01 June 2025 04:45:45 +0000 (0:00:00.739) 0:01:19.505 ***********
2025-06-01 04:46:57.462348 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462360 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462373 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462384 | orchestrator |
2025-06-01 04:46:57.462394 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-01 04:46:57.462405 | orchestrator | Sunday 01 June 2025 04:45:46 +0000 (0:00:00.342) 0:01:19.848 ***********
2025-06-01 04:46:57.462416 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462439 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462450 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462461 | orchestrator |
2025-06-01 04:46:57.462531 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-01 04:46:57.462543 | orchestrator | Sunday 01 June 2025 04:45:46 +0000 (0:00:00.299) 0:01:20.147 ***********
2025-06-01 04:46:57.462553 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462564 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462575 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462585 | orchestrator |
2025-06-01 04:46:57.462596 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-01 04:46:57.462607 | orchestrator | Sunday 01 June 2025 04:45:46 +0000 (0:00:00.312) 0:01:20.460 ***********
2025-06-01 04:46:57.462617 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462628 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462638 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462649 | orchestrator |
2025-06-01 04:46:57.462659 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-06-01 04:46:57.462678 | orchestrator | Sunday 01 June 2025 04:45:47 +0000 (0:00:00.715) 0:01:21.175 ***********
2025-06-01 04:46:57.462689 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462698 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462708 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462717 | orchestrator |
2025-06-01 04:46:57.462726 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-06-01 04:46:57.462736 | orchestrator | Sunday 01 June 2025 04:45:47 +0000 (0:00:00.314) 0:01:21.489 ***********
2025-06-01 04:46:57.462745 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462755 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462764 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462773 | orchestrator |
2025-06-01 04:46:57.462783 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-06-01 04:46:57.462793 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:00.467) 0:01:21.957 ***********
2025-06-01 04:46:57.462802 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462812 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462821 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462830 | orchestrator |
2025-06-01 04:46:57.462840 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-06-01 04:46:57.462849 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:00.328) 0:01:22.285 ***********
2025-06-01 04:46:57.462859 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462868 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462877 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462887 | orchestrator |
2025-06-01 04:46:57.462896 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-06-01 04:46:57.462905 | orchestrator | Sunday 01 June 2025 04:45:49 +0000 (0:00:00.790) 0:01:23.075 ***********
2025-06-01 04:46:57.462915 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.462925 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.462942 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.462951 | orchestrator |
2025-06-01 04:46:57.462966 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-01 04:46:57.462982 | orchestrator | Sunday 01 June 2025 04:45:49 +0000 (0:00:00.364) 0:01:23.439 ***********
2025-06-01 04:46:57.462998 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:46:57.463014 | orchestrator |
2025-06-01 04:46:57.463030 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-06-01 04:46:57.463044 | orchestrator | Sunday 01 June 2025 04:45:50 +0000 (0:00:00.678) 0:01:24.118 ***********
2025-06-01 04:46:57.463054 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.463063 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.463077 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.463092 | orchestrator |
2025-06-01 04:46:57.463107 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-06-01 04:46:57.463124 | orchestrator | Sunday 01 June 2025 04:45:51 +0000 (0:00:00.995) 0:01:25.114 ***********
2025-06-01 04:46:57.463139 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.463157 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.463167 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.463176 | orchestrator |
2025-06-01 04:46:57.463185 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-06-01 04:46:57.463195 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:00.561) 0:01:25.675 ***********
2025-06-01 04:46:57.463204 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.463214 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.463223 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.463232 | orchestrator | 2025-06-01 04:46:57.463242 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-01 04:46:57.463251 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:00.416) 0:01:26.092 *********** 2025-06-01 04:46:57.463270 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:46:57.463279 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.463289 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.463298 | orchestrator | 2025-06-01 04:46:57.463308 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-01 04:46:57.463317 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:00.409) 0:01:26.501 *********** 2025-06-01 04:46:57.463326 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:46:57.463336 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.463345 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.463355 | orchestrator | 2025-06-01 04:46:57.463364 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-01 04:46:57.463374 | orchestrator | Sunday 01 June 2025 04:45:53 +0000 (0:00:00.686) 0:01:27.188 *********** 2025-06-01 04:46:57.463383 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:46:57.463392 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.463402 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.463411 | orchestrator | 2025-06-01 04:46:57.463420 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-01 04:46:57.463435 | orchestrator | Sunday 01 June 2025 04:45:54 +0000 (0:00:00.378) 0:01:27.566 *********** 2025-06-01 04:46:57.463445 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 04:46:57.463454 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.463464 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.463501 | orchestrator | 2025-06-01 04:46:57.463511 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-01 04:46:57.463521 | orchestrator | Sunday 01 June 2025 04:45:54 +0000 (0:00:00.371) 0:01:27.938 *********** 2025-06-01 04:46:57.463530 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:46:57.463539 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.463549 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.463558 | orchestrator | 2025-06-01 04:46:57.463568 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-01 04:46:57.463577 | orchestrator | Sunday 01 June 2025 04:45:54 +0000 (0:00:00.340) 0:01:28.279 *********** 2025-06-01 04:46:57.463588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463677 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463697 | orchestrator | 2025-06-01 04:46:57.463711 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-01 04:46:57.463722 | orchestrator | Sunday 01 June 2025 04:45:56 +0000 (0:00:01.731) 0:01:30.010 *********** 2025-06-01 04:46:57.463732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463832 | orchestrator | 2025-06-01 04:46:57.463842 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-01 04:46:57.463851 | orchestrator | Sunday 01 June 2025 04:46:00 +0000 (0:00:04.092) 0:01:34.102 *********** 2025-06-01 04:46:57.463865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-01 04:46:57.463885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.463982 | orchestrator | 2025-06-01 04:46:57.463992 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 04:46:57.464001 | orchestrator | Sunday 01 June 2025 04:46:02 +0000 (0:00:01.904) 0:01:36.007 *********** 2025-06-01 04:46:57.464011 | orchestrator | 2025-06-01 04:46:57.464020 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 04:46:57.464030 | orchestrator | Sunday 01 June 2025 04:46:02 +0000 (0:00:00.091) 0:01:36.098 *********** 2025-06-01 04:46:57.464039 | orchestrator | 2025-06-01 04:46:57.464049 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 04:46:57.464058 | orchestrator | Sunday 01 June 2025 04:46:02 +0000 (0:00:00.067) 0:01:36.165 *********** 2025-06-01 04:46:57.464068 | orchestrator | 2025-06-01 04:46:57.464077 | 
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-01 04:46:57.464086 | orchestrator | Sunday 01 June 2025 04:46:02 +0000 (0:00:00.064) 0:01:36.230 ***********
2025-06-01 04:46:57.464096 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.464105 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:46:57.464115 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.464124 | orchestrator |
2025-06-01 04:46:57.464138 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-01 04:46:57.464148 | orchestrator | Sunday 01 June 2025 04:46:05 +0000 (0:00:02.623) 0:01:38.853 ***********
2025-06-01 04:46:57.464157 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.464167 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.464176 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:46:57.464185 | orchestrator |
2025-06-01 04:46:57.464195 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-01 04:46:57.464204 | orchestrator | Sunday 01 June 2025 04:46:13 +0000 (0:00:08.045) 0:01:46.899 ***********
2025-06-01 04:46:57.464214 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.464223 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.464232 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:46:57.464242 | orchestrator |
2025-06-01 04:46:57.464252 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-01 04:46:57.464268 | orchestrator | Sunday 01 June 2025 04:46:16 +0000 (0:00:02.816) 0:01:49.715 ***********
2025-06-01 04:46:57.464277 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.464287 | orchestrator |
2025-06-01 04:46:57.464296 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-01 04:46:57.464306 | orchestrator | Sunday 01 June
2025 04:46:16 +0000 (0:00:00.133) 0:01:49.849 *********** 2025-06-01 04:46:57.464315 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:46:57.464325 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:46:57.464334 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:46:57.464343 | orchestrator | 2025-06-01 04:46:57.464353 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-01 04:46:57.464362 | orchestrator | Sunday 01 June 2025 04:46:17 +0000 (0:00:00.975) 0:01:50.824 *********** 2025-06-01 04:46:57.464372 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.464381 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.464390 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:46:57.464400 | orchestrator | 2025-06-01 04:46:57.464409 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-01 04:46:57.464419 | orchestrator | Sunday 01 June 2025 04:46:18 +0000 (0:00:00.900) 0:01:51.725 *********** 2025-06-01 04:46:57.464428 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:46:57.464438 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:46:57.464447 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:46:57.464457 | orchestrator | 2025-06-01 04:46:57.464515 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-01 04:46:57.464528 | orchestrator | Sunday 01 June 2025 04:46:19 +0000 (0:00:00.867) 0:01:52.592 *********** 2025-06-01 04:46:57.464538 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:46:57.464547 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:46:57.464557 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:46:57.464566 | orchestrator | 2025-06-01 04:46:57.464576 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-01 04:46:57.464585 | orchestrator | Sunday 01 June 2025 04:46:19 +0000 (0:00:00.659) 
0:01:53.252 *********** 2025-06-01 04:46:57.464595 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:46:57.464604 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:46:57.464619 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:46:57.464629 | orchestrator | 2025-06-01 04:46:57.464639 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-01 04:46:57.464648 | orchestrator | Sunday 01 June 2025 04:46:20 +0000 (0:00:00.735) 0:01:53.987 *********** 2025-06-01 04:46:57.464657 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:46:57.464667 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:46:57.464676 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:46:57.464685 | orchestrator | 2025-06-01 04:46:57.464695 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-01 04:46:57.464704 | orchestrator | Sunday 01 June 2025 04:46:21 +0000 (0:00:01.161) 0:01:55.149 *********** 2025-06-01 04:46:57.464714 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:46:57.464723 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:46:57.464732 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:46:57.464742 | orchestrator | 2025-06-01 04:46:57.464751 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-01 04:46:57.464761 | orchestrator | Sunday 01 June 2025 04:46:21 +0000 (0:00:00.309) 0:01:55.458 *********** 2025-06-01 04:46:57.464771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464780 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464815 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464823 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464832 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464861 | orchestrator | 2025-06-01 04:46:57.464869 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-01 04:46:57.464877 | orchestrator | Sunday 01 June 2025 04:46:23 +0000 (0:00:01.416) 0:01:56.875 *********** 2025-06-01 04:46:57.464885 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464893 | orchestrator | ok: [testbed-node-1] 
=> (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464906 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.464978 | orchestrator | 2025-06-01 04:46:57.464990 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-01 04:46:57.465004 | orchestrator | Sunday 01 June 2025 04:46:27 +0000 (0:00:03.782) 0:02:00.657 *********** 2025-06-01 04:46:57.465025 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465038 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465064 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465090 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:46:57.465211 | orchestrator | 2025-06-01 04:46:57.465224 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 04:46:57.465238 | orchestrator | Sunday 01 June 2025 04:46:29 +0000 (0:00:02.848) 0:02:03.506 *********** 2025-06-01 04:46:57.465246 | orchestrator | 2025-06-01 04:46:57.465254 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 04:46:57.465262 | orchestrator 
| Sunday 01 June 2025 04:46:30 +0000 (0:00:00.080) 0:02:03.587 ***********
2025-06-01 04:46:57.465270 | orchestrator |
2025-06-01 04:46:57.465278 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-01 04:46:57.465285 | orchestrator | Sunday 01 June 2025 04:46:30 +0000 (0:00:00.066) 0:02:03.653 ***********
2025-06-01 04:46:57.465293 | orchestrator |
2025-06-01 04:46:57.465301 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-01 04:46:57.465309 | orchestrator | Sunday 01 June 2025 04:46:30 +0000 (0:00:00.066) 0:02:03.720 ***********
2025-06-01 04:46:57.465317 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.465336 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.465349 | orchestrator |
2025-06-01 04:46:57.465370 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-01 04:46:57.465383 | orchestrator | Sunday 01 June 2025 04:46:36 +0000 (0:00:06.276) 0:02:09.997 ***********
2025-06-01 04:46:57.465396 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.465409 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.465423 | orchestrator |
2025-06-01 04:46:57.465437 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-01 04:46:57.465450 | orchestrator | Sunday 01 June 2025 04:46:42 +0000 (0:00:06.124) 0:02:16.121 ***********
2025-06-01 04:46:57.465484 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:46:57.465497 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:46:57.465510 | orchestrator |
2025-06-01 04:46:57.465524 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-01 04:46:57.465533 | orchestrator | Sunday 01 June 2025 04:46:48 +0000 (0:00:06.116) 0:02:22.237 ***********
2025-06-01 04:46:57.465541 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:46:57.465549 | orchestrator |
2025-06-01 04:46:57.465556 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-01 04:46:57.465564 | orchestrator | Sunday 01 June 2025 04:46:48 +0000 (0:00:00.153) 0:02:22.391 ***********
2025-06-01 04:46:57.465572 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.465580 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.465588 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.465596 | orchestrator |
2025-06-01 04:46:57.465603 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-01 04:46:57.465611 | orchestrator | Sunday 01 June 2025 04:46:50 +0000 (0:00:01.293) 0:02:23.685 ***********
2025-06-01 04:46:57.465619 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.465627 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.465635 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:46:57.465642 | orchestrator |
2025-06-01 04:46:57.465650 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-01 04:46:57.465658 | orchestrator | Sunday 01 June 2025 04:46:50 +0000 (0:00:00.617) 0:02:24.302 ***********
2025-06-01 04:46:57.465666 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.465674 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.465682 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.465690 | orchestrator |
2025-06-01 04:46:57.465697 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-01 04:46:57.465705 | orchestrator | Sunday 01 June 2025 04:46:51 +0000 (0:00:00.839) 0:02:25.142 ***********
2025-06-01 04:46:57.465713 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:46:57.465721 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:46:57.465729 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:46:57.465737 | orchestrator |
2025-06-01 04:46:57.465744 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-01 04:46:57.465752 | orchestrator | Sunday 01 June 2025 04:46:52 +0000 (0:00:00.600) 0:02:25.742 ***********
2025-06-01 04:46:57.465760 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.465768 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.465776 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.465784 | orchestrator |
2025-06-01 04:46:57.465791 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-01 04:46:57.465804 | orchestrator | Sunday 01 June 2025 04:46:53 +0000 (0:00:01.287) 0:02:27.030 ***********
2025-06-01 04:46:57.465812 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:46:57.465820 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:46:57.465828 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:46:57.465836 | orchestrator |
2025-06-01 04:46:57.465844 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:46:57.465852 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-01 04:46:57.465867 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-01 04:46:57.465875 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-01 04:46:57.465883 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:46:57.465892 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:46:57.465899 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:46:57.465907 | orchestrator |
2025-06-01 04:46:57.465915 | orchestrator |
2025-06-01 04:46:57.465923 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:46:57.465931 | orchestrator | Sunday 01 June 2025 04:46:54 +0000 (0:00:01.193) 0:02:28.223 ***********
2025-06-01 04:46:57.465939 | orchestrator | ===============================================================================
2025-06-01 04:46:57.465947 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 39.08s
2025-06-01 04:46:57.465954 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.35s
2025-06-01 04:46:57.465962 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.17s
2025-06-01 04:46:57.465970 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.93s
2025-06-01 04:46:57.465978 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.90s
2025-06-01 04:46:57.465985 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.09s
2025-06-01 04:46:57.465993 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.78s
2025-06-01 04:46:57.466006 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.85s
2025-06-01 04:46:57.466062 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.68s
2025-06-01 04:46:57.466074 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.13s
2025-06-01 04:46:57.466084 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.96s
2025-06-01 04:46:57.466093 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.90s
2025-06-01 04:46:57.466102 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.73s
2025-06-01 04:46:57.466111 |
orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.54s
2025-06-01 04:46:57.466120 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.44s
2025-06-01 04:46:57.466129 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.43s
2025-06-01 04:46:57.466138 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s
2025-06-01 04:46:57.466147 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.29s
2025-06-01 04:46:57.466156 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.29s
2025-06-01 04:46:57.466165 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.19s
2025-06-01 04:46:57.466174 | orchestrator | 2025-06-01 04:46:57 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:46:57.466183 | orchestrator | 2025-06-01 04:46:57 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:46:57.466193 | orchestrator | 2025-06-01 04:46:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:47:00.523128 | orchestrator | 2025-06-01 04:47:00 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:47:00.523311 | orchestrator | 2025-06-01 04:47:00 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:47:00.523362 | orchestrator | 2025-06-01 04:47:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:47:03.577437 | orchestrator | 2025-06-01 04:47:03 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:47:03.577658 | orchestrator | 2025-06-01 04:47:03 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:47:03.577688 | orchestrator | 2025-06-01 04:47:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:47:06.620419 | orchestrator | 2025-06-01 04:47:06 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:47:06.620575 | orchestrator | 2025-06-01 04:47:06 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:47:06.620593 | orchestrator | 2025-06-01 04:47:06 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:47:09.673993 | orchestrator | 2025-06-01 04:47:09 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:47:09.676408 | orchestrator | 2025-06-01 04:47:09 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:47:09.676619 | orchestrator | 2025-06-01 04:47:09 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:47:12.737312 | orchestrator | 2025-06-01 04:47:12 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:47:12.741141 | orchestrator | 2025-06-01 04:47:12 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:47:12.741193 | orchestrator | 2025-06-01 04:47:12 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:47:15.778285 | orchestrator | 2025-06-01 04:47:15 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:47:15.782239 | orchestrator | 2025-06-01 04:47:15 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:47:15.782279 | orchestrator | 2025-06-01 04:47:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:47:18.824035 | orchestrator | 2025-06-01 04:47:18 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:47:18.826574 | orchestrator | 2025-06-01 04:47:18 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED 2025-06-01 04:47:18.827091 | orchestrator | 2025-06-01 04:47:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:47:21.876588 | orchestrator | 2025-06-01 04:47:21 | INFO  | Task 
53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:47:21.878894 | orchestrator | 2025-06-01 04:47:21 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:47:21.881993 | orchestrator | 2025-06-01 04:47:21 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:33.094093 | orchestrator | 2025-06-01 04:49:33 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:49:33.096044 | orchestrator | 2025-06-01 04:49:33 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state STARTED
2025-06-01 04:49:33.096509 | orchestrator | 2025-06-01 04:49:33 | INFO  | Task 4caeb60c-7d37-4d70-baa1-dbd80130785f is in state STARTED
2025-06-01 04:49:33.097013 | orchestrator | 2025-06-01 04:49:33 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:39.184173 | orchestrator | 2025-06-01 04:49:39 |
INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:42.227474 | orchestrator | 2025-06-01 04:49:42 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:49:42.227580 | orchestrator | 2025-06-01 04:49:42 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:49:42.227849 | orchestrator | 2025-06-01 04:49:42 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:49:42.238584 | orchestrator | 2025-06-01 04:49:42 | INFO  | Task 5286601f-cccc-4052-8961-4136d1f41967 is in state SUCCESS
2025-06-01 04:49:42.240963 | orchestrator |
2025-06-01 04:49:42.241004 | orchestrator |
2025-06-01 04:49:42.241016 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:49:42.241028 | orchestrator |
2025-06-01 04:49:42.241038 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 04:49:42.241048 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.460) 0:00:00.460 ***********
2025-06-01 04:49:42.241058 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.241071 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.241088 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.241104 | orchestrator |
2025-06-01 04:49:42.241120 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 04:49:42.241137 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.479) 0:00:00.939 ***********
2025-06-01 04:49:42.241153 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-01 04:49:42.241170 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-01 04:49:42.241187 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-01 04:49:42.241204 | orchestrator |
2025-06-01 04:49:42.241222 | orchestrator | PLAY [Apply role loadbalancer]
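The "is in state STARTED … Wait 1 second(s) until the next check" lines above come from a simple poll-until-done loop over Celery-style task IDs. A minimal sketch of that pattern follows; `get_task_state` is a hypothetical caller-supplied helper (not the actual OSISM client API), shown only to illustrate the polling behavior.

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=None):
    """Poll each task until every one has left the STARTED state.

    get_task_state is a caller-supplied function (hypothetical here)
    mapping a task ID to a state string such as "STARTED" or "SUCCESS".
    Returns a dict mapping task ID -> final state.
    """
    pending = set(task_ids)
    final = {}
    start = time.monotonic()
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state
        pending -= final.keys()
        if pending:
            if timeout is not None and time.monotonic() - start > timeout:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return final
```

With `interval=1.0` this reproduces the cadence seen in the log: one status line per pending task, then a one-second wait, until each task reports a terminal state such as SUCCESS.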
*************************************************
2025-06-01 04:49:42.241267 | orchestrator |
2025-06-01 04:49:42.241286 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-01 04:49:42.241303 | orchestrator | Sunday 01 June 2025 04:43:23 +0000 (0:00:00.630) 0:00:01.569 ***********
2025-06-01 04:49:42.241319 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.241336 | orchestrator |
2025-06-01 04:49:42.241376 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-01 04:49:42.241393 | orchestrator | Sunday 01 June 2025 04:43:24 +0000 (0:00:01.028) 0:00:02.598 ***********
2025-06-01 04:49:42.241410 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.241426 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.241443 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.241458 | orchestrator |
2025-06-01 04:49:42.241475 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-01 04:49:42.241491 | orchestrator | Sunday 01 June 2025 04:43:25 +0000 (0:00:00.749) 0:00:03.347 ***********
2025-06-01 04:49:42.241507 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.241523 | orchestrator |
2025-06-01 04:49:42.241540 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-01 04:49:42.241556 | orchestrator | Sunday 01 June 2025 04:43:26 +0000 (0:00:00.642) 0:00:04.322 ***********
2025-06-01 04:49:42.241573 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.241591 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.241740 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.241752 | orchestrator |
2025-06-01 04:49:42.241764 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-01 04:49:42.241776 | orchestrator | Sunday 01 June 2025 04:43:26 +0000 (0:00:00.642) 0:00:04.964 ***********
2025-06-01 04:49:42.241788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 04:49:42.241799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 04:49:42.241810 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 04:49:42.241822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 04:49:42.241833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 04:49:42.241845 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 04:49:42.241870 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 04:49:42.241883 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 04:49:42.241894 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 04:49:42.241906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 04:49:42.241916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 04:49:42.241926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 04:49:42.241936 | orchestrator |
2025-06-01 04:49:42.241944 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-01 04:49:42.241952 | orchestrator | Sunday 01 June 2025 04:43:29 +0000 (0:00:02.539) 0:00:07.507 ***********
2025-06-01 04:49:42.241960 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-01 04:49:42.241968 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-01 04:49:42.241976 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-01 04:49:42.241984 | orchestrator |
2025-06-01 04:49:42.241992 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-01 04:49:42.242000 | orchestrator | Sunday 01 June 2025 04:43:30 +0000 (0:00:00.859) 0:00:08.367 ***********
2025-06-01 04:49:42.242062 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-01 04:49:42.242088 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-01 04:49:42.242101 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-01 04:49:42.242113 | orchestrator |
2025-06-01 04:49:42.242125 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-01 04:49:42.242137 | orchestrator | Sunday 01 June 2025 04:43:31 +0000 (0:00:01.407) 0:00:09.774 ***********
2025-06-01 04:49:42.242150 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-01 04:49:42.242162 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.242191 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-01 04:49:42.242206 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.242220 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-01 04:49:42.242234 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.242248 | orchestrator |
2025-06-01 04:49:42.242256 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-01 04:49:42.242264 | orchestrator | Sunday 01 June 2025 04:43:32 +0000 (0:00:01.254) 0:00:11.028 ***********
2025-06-01 04:49:42.242276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value':
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.242288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.242297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.242312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.242322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.242371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.242382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.242391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.242399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.242407 | orchestrator |
2025-06-01 04:49:42.242415 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-01 04:49:42.242423 | orchestrator | Sunday 01 June 2025 04:43:34 +0000 (0:00:01.769) 0:00:12.798 ***********
2025-06-01 04:49:42.242431 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.242439 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.242447 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.242455 | orchestrator |
2025-06-01 04:49:42.242462 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-01 04:49:42.242470 | orchestrator | Sunday 01 June 2025 04:43:36 +0000 (0:00:01.468) 0:00:14.267 ***********
2025-06-01 04:49:42.242478 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-01 04:49:42.242486 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-01 04:49:42.242494 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-01 04:49:42.242501 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-01 04:49:42.242509 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-01 04:49:42.242517 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-01 04:49:42.242524 | orchestrator |
2025-06-01 04:49:42.242532 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-01 04:49:42.242540 | orchestrator | Sunday 01 June 2025 04:43:38 +0000 (0:00:02.296) 0:00:16.563 ***********
2025-06-01 04:49:42.242555 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.242563 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.242570 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.242578 | orchestrator |
2025-06-01 04:49:42.242590 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-01 04:49:42.242598 | orchestrator | Sunday 01 June 2025 04:43:40 +0000 (0:00:02.214) 0:00:18.777 ***********
2025-06-01 04:49:42.242606 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.242614 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.242622 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.242630 | orchestrator |
2025-06-01 04:49:42.242637 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-01 04:49:42.242645 |
orchestrator | Sunday 01 June 2025 04:43:42 +0000 (0:00:01.612) 0:00:20.390 ***********
2025-06-01 04:49:42.242653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.242708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.242718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.242727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.242736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.242754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-01 04:49:42.242764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.242772 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.242781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-01 04:49:42.242789 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.242806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.242815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.242823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.242832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 04:49:42.242845 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.242853 | orchestrator | 2025-06-01 04:49:42.242861 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-01 04:49:42.242869 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:00.975) 0:00:21.365 *********** 2025-06-01 04:49:42.242881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.242890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.242904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.242913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.242922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.242935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 04:49:42.242947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.242956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.242964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.242978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 04:49:42.242987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.242996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955', '__omit_place_holder__d65ab1fd0c8f165d52d18e0dcb14874a24855955'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 04:49:42.243009 | orchestrator | 2025-06-01 04:49:42.243017 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-01 04:49:42.243025 | orchestrator | Sunday 01 June 2025 04:43:47 +0000 (0:00:03.932) 0:00:25.298 *********** 2025-06-01 04:49:42.243034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243258 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243311 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.243333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.243341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.243377 | orchestrator |
2025-06-01 04:49:42.243386 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-06-01 04:49:42.243394 | orchestrator | Sunday 01 June 2025 04:43:50 +0000 (0:00:03.621) 0:00:28.920 ***********
2025-06-01 04:49:42.243402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-01 04:49:42.243411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-01 04:49:42.243419 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-01 04:49:42.243427 | orchestrator |
2025-06-01 04:49:42.243435 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-06-01 04:49:42.243442 | orchestrator | Sunday 01 June 2025 04:43:53 +0000 (0:00:02.319) 0:00:31.240 ***********
2025-06-01 04:49:42.243450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-01 04:49:42.243458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-01 04:49:42.243466 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-01 04:49:42.243474 | orchestrator |
2025-06-01 04:49:42.243493 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-06-01 04:49:42.243501 | orchestrator | Sunday 01 June 2025 04:43:56 +0000 (0:00:03.424) 0:00:34.665 ***********
2025-06-01 04:49:42.243509 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.243517 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.243525 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.243533 | orchestrator |
2025-06-01 04:49:42.243546 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-06-01 04:49:42.243554 | orchestrator | Sunday 01 June 2025 04:43:57 +0000 (0:00:00.514) 0:00:35.180 ***********
2025-06-01 04:49:42.243562 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-01 04:49:42.243571 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-01 04:49:42.243579 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-01 04:49:42.243587 | orchestrator |
2025-06-01 04:49:42.243595 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-06-01 04:49:42.243603 | orchestrator | Sunday 01 June 2025 04:43:59 +0000 (0:00:02.053) 0:00:37.233 ***********
2025-06-01 04:49:42.243611 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-01 04:49:42.243619 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-01 04:49:42.243627 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-01 04:49:42.243635 | orchestrator |
2025-06-01 04:49:42.243643 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-06-01 04:49:42.243651 | orchestrator | Sunday 01 June 2025 04:44:00 +0000 (0:00:01.518) 0:00:38.751 ***********
2025-06-01 04:49:42.243659 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-06-01 04:49:42.243667 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-06-01 04:49:42.243675 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-06-01 04:49:42.243682 | orchestrator |
2025-06-01 04:49:42.243690 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-06-01 04:49:42.243698 | orchestrator | Sunday 01 June 2025 04:44:02 +0000 (0:00:02.197) 0:00:40.949 ***********
2025-06-01 04:49:42.243706 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-06-01 04:49:42.243714 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-06-01 04:49:42.243722 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-06-01 04:49:42.243730 | orchestrator |
2025-06-01 04:49:42.243738 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-01 04:49:42.243745 | orchestrator | Sunday 01 June 2025 04:44:04 +0000 (0:00:01.509) 0:00:42.459 ***********
2025-06-01 04:49:42.243753 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.243761 | orchestrator |
2025-06-01 04:49:42.243769 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-06-01 04:49:42.243777 | orchestrator | Sunday 01 June 2025 04:44:05 +0000 (0:00:00.728) 0:00:43.188 ***********
2025-06-01 04:49:42.243789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.243850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.243859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.243867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.243880 | orchestrator | 2025-06-01 04:49:42.243888 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-01 04:49:42.243896 | orchestrator | Sunday 01 June 2025 04:44:08 +0000 (0:00:03.676) 0:00:46.864 *********** 2025-06-01 04:49:42.243911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-01 04:49:42.243919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 04:49:42.243928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 04:49:42.243936 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.244006 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.244046 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.244054 | orchestrator |
2025-06-01 04:49:42.244062 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-06-01 04:49:42.244070 | orchestrator | Sunday 01 June 2025 04:44:09 +0000 (0:00:00.784) 0:00:47.649 ***********
2025-06-01 04:49:42.244112 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.244150 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.244191 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.244199 | orchestrator |
2025-06-01 04:49:42.244207 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-06-01 04:49:42.244215 | orchestrator | Sunday 01 June 2025 04:44:11 +0000 (0:00:01.598) 0:00:49.248 ***********
2025-06-01 04:49:42.244254 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.244292 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.244338 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.244506 | orchestrator |
2025-06-01 04:49:42.244518 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-06-01 04:49:42.244526 | orchestrator | Sunday 01 June 2025 04:44:11 +0000 (0:00:00.635) 0:00:49.883 ***********
2025-06-01 04:49:42.244567 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.244621 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.244651 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.244659 | orchestrator |
2025-06-01 04:49:42.244667 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-06-01 04:49:42.244675 | orchestrator | Sunday 01 June 2025 04:44:12 +0000 (0:00:00.667) 0:00:50.551 ***********
2025-06-01 04:49:42.245118 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.245157 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.245201 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.245209 | orchestrator |
2025-06-01 04:49:42.245216 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-06-01 04:49:42.245225 | orchestrator | Sunday 01 June 2025 04:44:13 +0000 (0:00:01.309) 0:00:51.860 ***********
2025-06-01 04:49:42.245263 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.245304 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.245336 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.245394 | orchestrator |
2025-06-01 04:49:42.245404 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-06-01 04:49:42.245418 | orchestrator | Sunday 01 June 2025 04:44:14 +0000 (0:00:01.139) 0:00:53.000 ***********
2025-06-01 04:49:42.245597 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.245635 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.245666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.245674 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.245683 | orchestrator | 2025-06-01 04:49:42.245697 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-01 04:49:42.245706 | orchestrator | Sunday 01 June 2025 04:44:16 +0000 (0:00:01.269) 0:00:54.269 *********** 2025-06-01 04:49:42.245720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 04:49:42.245730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 04:49:42.245740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.245749 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.245763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 04:49:42.245773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 04:49:42.245782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.245791 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.245806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 04:49:42.245820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 04:49:42.245829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 04:49:42.245839 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.245848 | orchestrator | 2025-06-01 04:49:42.245857 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-01 04:49:42.245866 | orchestrator | Sunday 01 June 2025 04:44:17 +0000 (0:00:00.959) 0:00:55.229 *********** 2025-06-01 04:49:42.245884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-01 04:49:42.245894 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-01 04:49:42.245903 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-01 04:49:42.245913 | orchestrator | 2025-06-01 04:49:42.245922 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-01 04:49:42.245932 | orchestrator | Sunday 01 June 2025 04:44:18 +0000 (0:00:01.382) 0:00:56.612 *********** 2025-06-01 04:49:42.245941 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-01 04:49:42.245950 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-01 04:49:42.245960 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-01 04:49:42.246001 | orchestrator | 2025-06-01 04:49:42.246011 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-01 04:49:42.246061 | orchestrator | Sunday 01 June 2025 04:44:19 +0000 (0:00:01.305) 0:00:57.917 *********** 2025-06-01 04:49:42.246070 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 04:49:42.246078 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 04:49:42.246086 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 04:49:42.246094 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 04:49:42.246102 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 04:49:42.246110 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.246118 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.246126 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 04:49:42.246134 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.246142 | orchestrator | 2025-06-01 04:49:42.246150 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-01 04:49:42.246158 | orchestrator | Sunday 01 June 2025 04:44:21 +0000 (0:00:01.257) 0:00:59.175 *********** 2025-06-01 04:49:42.246172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.246186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.246195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 04:49:42.246209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.246218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.246226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 04:49:42.246234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.246249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.246261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 04:49:42.246274 | orchestrator | 2025-06-01 04:49:42.246282 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-01 04:49:42.246290 | orchestrator | Sunday 01 June 2025 04:44:23 +0000 (0:00:02.691) 0:01:01.867 *********** 2025-06-01 04:49:42.246298 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.246306 | orchestrator | 2025-06-01 04:49:42.246315 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-01 04:49:42.246323 | orchestrator | Sunday 01 
June 2025 04:44:24 +0000 (0:00:00.742) 0:01:02.609 *********** 2025-06-01 04:49:42.246332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 04:49:42.246342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.246372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 04:49:42.246414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.246423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 04:49:42.246546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.246562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246589 | orchestrator | 2025-06-01 04:49:42.246597 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-01 04:49:42.246605 | orchestrator | Sunday 01 June 2025 04:44:28 +0000 (0:00:04.396) 0:01:07.006 *********** 2025-06-01 04:49:42.246614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 04:49:42.246622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2025-06-01 04:49:42.246630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246647 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.246659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 04:49:42.246677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.246686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2025-06-01 04:49:42.246703 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.246711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 04:49:42.246720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.246728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.246758 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.246767 | orchestrator | 2025-06-01 04:49:42.246775 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-01 04:49:42.246783 | orchestrator | Sunday 01 June 2025 04:44:29 +0000 (0:00:00.883) 0:01:07.889 *********** 2025-06-01 04:49:42.246791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 04:49:42.246800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 04:49:42.246810 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.246818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 04:49:42.246826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 04:49:42.246834 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.246842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 04:49:42.246850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 04:49:42.246858 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.246866 | orchestrator | 2025-06-01 04:49:42.246874 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-01 04:49:42.246882 | orchestrator | Sunday 01 June 2025 04:44:31 +0000 (0:00:01.364) 0:01:09.253 *********** 2025-06-01 04:49:42.246890 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.246898 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.246906 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.246914 | orchestrator | 2025-06-01 04:49:42.246922 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-01 04:49:42.246929 | orchestrator | Sunday 01 June 2025 04:44:32 +0000 (0:00:01.226) 0:01:10.480 *********** 2025-06-01 04:49:42.246937 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.246945 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.246953 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.246961 | orchestrator | 2025-06-01 04:49:42.246969 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-01 04:49:42.246976 | orchestrator | Sunday 01 June 2025 
04:44:34 +0000 (0:00:01.866) 0:01:12.347 *********** 2025-06-01 04:49:42.246984 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.246992 | orchestrator | 2025-06-01 04:49:42.247000 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-01 04:49:42.247008 | orchestrator | Sunday 01 June 2025 04:44:34 +0000 (0:00:00.637) 0:01:12.984 *********** 2025-06-01 04:49:42.247022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.247043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.247100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.247140 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247157 | orchestrator | 2025-06-01 04:49:42.247165 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-01 04:49:42.247173 | orchestrator | Sunday 01 June 2025 04:44:39 +0000 (0:00:04.370) 0:01:17.355 *********** 2025-06-01 04:49:42.247181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.247206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247228 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.247241 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.247253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247269 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.247278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.247292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247300 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.247308 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.247316 | orchestrator | 2025-06-01 04:49:42.247324 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-01 04:49:42.247332 | orchestrator | Sunday 01 June 2025 04:44:39 +0000 (0:00:00.468) 0:01:17.823 *********** 2025-06-01 04:49:42.247361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 04:49:42.247371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 04:49:42.247380 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.247483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 04:49:42.247494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2025-06-01 04:49:42.247503 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.247511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 04:49:42.247519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 04:49:42.247527 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.247534 | orchestrator | 2025-06-01 04:49:42.247542 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-01 04:49:42.247550 | orchestrator | Sunday 01 June 2025 04:44:40 +0000 (0:00:00.804) 0:01:18.628 *********** 2025-06-01 04:49:42.247558 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.247565 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.247579 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.247587 | orchestrator | 2025-06-01 04:49:42.247595 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-01 04:49:42.247603 | orchestrator | Sunday 01 June 2025 04:44:42 +0000 (0:00:02.393) 0:01:21.022 *********** 2025-06-01 04:49:42.247611 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.247619 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.247626 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.247634 | orchestrator | 2025-06-01 04:49:42.247642 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-01 04:49:42.247650 | orchestrator | Sunday 01 June 2025 04:44:44 +0000 (0:00:01.892) 0:01:22.915 *********** 2025-06-01 04:49:42.247657 | orchestrator | 
skipping: [testbed-node-0] 2025-06-01 04:49:42.247665 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.247673 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.247681 | orchestrator | 2025-06-01 04:49:42.247689 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-01 04:49:42.247697 | orchestrator | Sunday 01 June 2025 04:44:45 +0000 (0:00:00.322) 0:01:23.237 *********** 2025-06-01 04:49:42.247704 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.247712 | orchestrator | 2025-06-01 04:49:42.247720 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-01 04:49:42.247728 | orchestrator | Sunday 01 June 2025 04:44:45 +0000 (0:00:00.726) 0:01:23.963 *********** 2025-06-01 04:49:42.247736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-01 04:49:42.247751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-01 04:49:42.247764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-01 04:49:42.247777 | orchestrator | 2025-06-01 04:49:42.247785 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-01 04:49:42.247793 | orchestrator | Sunday 01 June 2025 04:44:49 +0000 (0:00:04.100) 0:01:28.064 *********** 2025-06-01 04:49:42.247801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 04:49:42.247809 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.247817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 04:49:42.247825 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.247834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 04:49:42.247842 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.247850 | orchestrator | 2025-06-01 04:49:42.247857 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-01 04:49:42.247865 | orchestrator | Sunday 01 June 2025 04:44:51 +0000 (0:00:02.035) 0:01:30.100 *********** 2025-06-01 04:49:42.247878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 04:49:42.247891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 04:49:42.247905 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.247914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 04:49:42.247922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 04:49:42.247930 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.247938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 04:49:42.247946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 04:49:42.247954 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.247962 | orchestrator | 2025-06-01 04:49:42.247970 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2025-06-01 04:49:42.247978 | orchestrator | Sunday 01 June 2025 04:44:53 +0000 (0:00:01.704) 0:01:31.804 *********** 2025-06-01 04:49:42.247985 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.247993 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.248001 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.248009 | orchestrator | 2025-06-01 04:49:42.248017 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-01 04:49:42.248024 | orchestrator | Sunday 01 June 2025 04:44:54 +0000 (0:00:00.699) 0:01:32.503 *********** 2025-06-01 04:49:42.248032 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.248040 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.248048 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.248055 | orchestrator | 2025-06-01 04:49:42.248063 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-01 04:49:42.248071 | orchestrator | Sunday 01 June 2025 04:44:55 +0000 (0:00:01.216) 0:01:33.720 *********** 2025-06-01 04:49:42.248079 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.248086 | orchestrator | 2025-06-01 04:49:42.248094 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-01 04:49:42.248102 | orchestrator | Sunday 01 June 2025 04:44:56 +0000 (0:00:00.657) 0:01:34.378 *********** 2025-06-01 04:49:42.248115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.248132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248150 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.248167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.248239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248278 | orchestrator | 2025-06-01 04:49:42.248296 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-01 04:49:42.248308 | orchestrator | Sunday 01 June 2025 04:45:00 +0000 (0:00:04.314) 0:01:38.692 *********** 2025-06-01 04:49:42.248316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.248325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248492 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.248508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.248542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248569 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.248577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.248591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.248627 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.248635 | orchestrator | 2025-06-01 04:49:42.248643 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-01 04:49:42.248651 | orchestrator | Sunday 01 June 2025 04:45:01 +0000 (0:00:01.223) 0:01:39.916 *********** 2025-06-01 04:49:42.248659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 04:49:42.248668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 04:49:42.248676 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.248684 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 04:49:42.248692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 04:49:42.248700 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.248708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 04:49:42.248721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 04:49:42.248729 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.248737 | orchestrator | 2025-06-01 04:49:42.248745 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-01 04:49:42.248753 | orchestrator | Sunday 01 June 2025 04:45:02 +0000 (0:00:00.949) 0:01:40.865 *********** 2025-06-01 04:49:42.248760 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.248768 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.248776 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.248784 | orchestrator | 2025-06-01 04:49:42.248792 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-01 04:49:42.248800 | orchestrator | Sunday 01 June 2025 04:45:03 +0000 (0:00:01.184) 0:01:42.050 *********** 2025-06-01 04:49:42.248808 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.248815 | 
orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.248823 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.248831 | orchestrator | 2025-06-01 04:49:42.248839 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-01 04:49:42.248847 | orchestrator | Sunday 01 June 2025 04:45:06 +0000 (0:00:02.212) 0:01:44.262 *********** 2025-06-01 04:49:42.248854 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.248862 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.248870 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.248878 | orchestrator | 2025-06-01 04:49:42.248886 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-01 04:49:42.248893 | orchestrator | Sunday 01 June 2025 04:45:06 +0000 (0:00:00.546) 0:01:44.809 *********** 2025-06-01 04:49:42.248906 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.248914 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.248922 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.248930 | orchestrator | 2025-06-01 04:49:42.248937 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-01 04:49:42.248945 | orchestrator | Sunday 01 June 2025 04:45:06 +0000 (0:00:00.267) 0:01:45.076 *********** 2025-06-01 04:49:42.248953 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.248961 | orchestrator | 2025-06-01 04:49:42.248969 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-01 04:49:42.249025 | orchestrator | Sunday 01 June 2025 04:45:07 +0000 (0:00:00.816) 0:01:45.893 *********** 2025-06-01 04:49:42.249036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 04:49:42.249045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 04:49:42.249067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-06-01 04:49:42.249076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 04:49:42.249166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 04:49:42.249175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 
04:49:42.249274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 04:49:42.249307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 04:49:42.249316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249390 | orchestrator | 2025-06-01 
04:49:42.249398 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-01 04:49:42.249406 | orchestrator | Sunday 01 June 2025 04:45:12 +0000 (0:00:04.440) 0:01:50.334 *********** 2025-06-01 04:49:42.249414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 04:49:42.249422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 04:49:42.249431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 04:49:42.249482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 04:49:42.249498 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.249506 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.250204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250444 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.250459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 04:49:42.250484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 04:49:42.250504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250559 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.250571 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.250582 | orchestrator | 2025-06-01 04:49:42.250594 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-01 04:49:42.250607 | orchestrator | Sunday 01 June 2025 04:45:12 +0000 (0:00:00.797) 0:01:51.132 *********** 2025-06-01 04:49:42.250619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 04:49:42.250630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 04:49:42.250642 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.250653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 04:49:42.250664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 04:49:42.250674 | orchestrator | skipping: [testbed-node-1] 
2025-06-01 04:49:42.250694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 04:49:42.250713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 04:49:42.250724 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.250735 | orchestrator | 2025-06-01 04:49:42.250751 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-01 04:49:42.250762 | orchestrator | Sunday 01 June 2025 04:45:13 +0000 (0:00:00.968) 0:01:52.100 *********** 2025-06-01 04:49:42.250773 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.250784 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.250794 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.250805 | orchestrator | 2025-06-01 04:49:42.250815 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-01 04:49:42.250826 | orchestrator | Sunday 01 June 2025 04:45:15 +0000 (0:00:01.810) 0:01:53.911 *********** 2025-06-01 04:49:42.250837 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.250848 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.250858 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.250869 | orchestrator | 2025-06-01 04:49:42.250879 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-01 04:49:42.250890 | orchestrator | Sunday 01 June 2025 04:45:17 +0000 (0:00:01.907) 0:01:55.818 *********** 2025-06-01 04:49:42.250901 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.250919 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.250936 | 
orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.250964 | orchestrator | 2025-06-01 04:49:42.250983 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-01 04:49:42.250998 | orchestrator | Sunday 01 June 2025 04:45:17 +0000 (0:00:00.294) 0:01:56.113 *********** 2025-06-01 04:49:42.251014 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.251031 | orchestrator | 2025-06-01 04:49:42.251048 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-01 04:49:42.251064 | orchestrator | Sunday 01 June 2025 04:45:18 +0000 (0:00:00.738) 0:01:56.852 *********** 2025-06-01 04:49:42.251087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:49:42.251136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.251161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:49:42.251186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.251206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:49:42.251225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.251246 | orchestrator | 2025-06-01 04:49:42.251257 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-01 04:49:42.251268 | orchestrator | Sunday 01 June 
2025 04:45:22 +0000 (0:00:04.130) 0:02:00.982 *********** 2025-06-01 04:49:42.251285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:49:42.251299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.251317 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.251338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:49:42.251408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.251437 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.251463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:49:42.251476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.251496 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.251508 | orchestrator | 2025-06-01 04:49:42.251518 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-01 04:49:42.251529 | orchestrator | Sunday 01 June 2025 04:45:25 +0000 (0:00:02.719) 0:02:03.702 *********** 2025-06-01 04:49:42.251541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 04:49:42.251607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 04:49:42.251621 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.251637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 04:49:42.251649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 04:49:42.251661 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.251672 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 04:49:42.251684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 04:49:42.251695 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.251706 | orchestrator | 2025-06-01 04:49:42.251717 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-01 04:49:42.251728 | orchestrator | Sunday 01 June 2025 04:45:28 +0000 (0:00:03.085) 0:02:06.787 *********** 2025-06-01 04:49:42.251749 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.251760 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.251770 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.251781 | orchestrator | 2025-06-01 04:49:42.251792 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-01 04:49:42.251803 | orchestrator | Sunday 01 June 2025 04:45:30 +0000 (0:00:01.674) 0:02:08.461 *********** 2025-06-01 04:49:42.251814 | orchestrator | changed: [testbed-node-0] 
2025-06-01 04:49:42.251825 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.251836 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.251846 | orchestrator |
2025-06-01 04:49:42.251857 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-06-01 04:49:42.251868 | orchestrator | Sunday 01 June 2025 04:45:32 +0000 (0:00:02.023) 0:02:10.485 ***********
2025-06-01 04:49:42.251879 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.251889 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.251900 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.251911 | orchestrator |
2025-06-01 04:49:42.251922 | orchestrator | TASK [include_role : grafana] **************************************************
2025-06-01 04:49:42.251933 | orchestrator | Sunday 01 June 2025 04:45:32 +0000 (0:00:00.294) 0:02:10.779 ***********
2025-06-01 04:49:42.251943 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.251954 | orchestrator |
2025-06-01 04:49:42.251965 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-06-01 04:49:42.251975 | orchestrator | Sunday 01 June 2025 04:45:33 +0000 (0:00:00.797) 0:02:11.577 ***********
2025-06-01 04:49:42.251993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-06-01 04:49:42.252012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 04:49:42.252024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 04:49:42.252035 | orchestrator | 2025-06-01 04:49:42.252046 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-01 04:49:42.252057 | orchestrator | Sunday 01 June 2025 04:45:36 +0000 (0:00:03.126) 0:02:14.703 *********** 2025-06-01 04:49:42.252076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 04:49:42.252088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 04:49:42.252099 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.252110 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.252122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})
2025-06-01 04:49:42.252133 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.252144 | orchestrator |
2025-06-01 04:49:42.252155 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-06-01 04:49:42.252166 | orchestrator | Sunday 01 June 2025 04:45:36 +0000 (0:00:00.364) 0:02:15.068 ***********
2025-06-01 04:49:42.252181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-01 04:49:42.252193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-01 04:49:42.252204 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.252220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-01 04:49:42.252231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-01 04:49:42.252242 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.252253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-01 04:49:42.252264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-01 04:49:42.252281 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.252292 | orchestrator |
2025-06-01 04:49:42.252303 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-06-01 04:49:42.252314 | orchestrator | Sunday 01 June 2025 04:45:37 +0000 (0:00:00.635) 0:02:15.704 ***********
2025-06-01 04:49:42.252324 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.252335 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.252375 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.252396 | orchestrator |
2025-06-01 04:49:42.252407 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-06-01 04:49:42.252418 | orchestrator | Sunday 01 June 2025 04:45:39 +0000 (0:00:01.563) 0:02:17.267 ***********
2025-06-01 04:49:42.252429 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.252440 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.252450 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.252461 | orchestrator |
2025-06-01 04:49:42.252472 | orchestrator | TASK [include_role : heat] *****************************************************
2025-06-01 04:49:42.252482 | orchestrator | Sunday 01 June 2025 04:45:41 +0000 (0:00:02.002) 0:02:19.270 ***********
2025-06-01 04:49:42.252493 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.252504 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.252515 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.252526 | orchestrator |
2025-06-01 04:49:42.252537 | orchestrator | TASK [include_role : horizon] **************************************************
2025-06-01 04:49:42.252547 | orchestrator | Sunday 01 June 2025 04:45:41 +0000 (0:00:00.325) 0:02:19.596 ***********
2025-06-01 04:49:42.252558 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.252569 | orchestrator |
2025-06-01 04:49:42.252579 | orchestrator | TASK [haproxy-config 
: Copying over horizon haproxy config] ******************** 2025-06-01 04:49:42.252590 | orchestrator | Sunday 01 June 2025 04:45:42 +0000 (0:00:00.854) 0:02:20.451 *********** 2025-06-01 04:49:42.252617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 04:49:42.252638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 04:49:42.252665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 04:49:42.252685 | orchestrator |
2025-06-01 04:49:42.252696 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-06-01 04:49:42.252707 | orchestrator | Sunday 01 June 2025 04:45:46 +0000 (0:00:04.633) 0:02:25.084 ***********
2025-06-01 04:49:42.252719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 04:49:42.252731 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.252756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 04:49:42.252775 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.252787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 04:49:42.252800 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.252810 | orchestrator |
2025-06-01 04:49:42.252821 | orchestrator | TASK [haproxy-config : Configuring firewall for
horizon] ***********************
2025-06-01 04:49:42.252832 | orchestrator | Sunday 01 June 2025 04:45:47 +0000 (0:00:00.848) 0:02:25.933 ***********
2025-06-01 04:49:42.252843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-01 04:49:42.252875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-01 04:49:42.252899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-01 04:49:42.252916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-01 04:49:42.252928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-01 04:49:42.252939 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.252951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-01 04:49:42.252962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-01 04:49:42.252974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-01 04:49:42.252985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-01 04:49:42.252996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-01 04:49:42.253008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-01 04:49:42.253019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-01 04:49:42.253031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-01 04:49:42.253048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-01 04:49:42.253059 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.253070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-01 04:49:42.253081 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.253092 | orchestrator |
2025-06-01 04:49:42.253109 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-01 04:49:42.253120 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:01.102) 0:02:27.036 ***********
2025-06-01 04:49:42.253131 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.253142 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.253152 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.253163 | orchestrator |
2025-06-01 04:49:42.253174 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-01 04:49:42.253184 | orchestrator | Sunday 01 June 2025 04:45:50 +0000 (0:00:01.842) 0:02:28.878 ***********
2025-06-01 04:49:42.253195 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.253206 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.253221 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.253232 | orchestrator |
2025-06-01 04:49:42.253243 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-01 04:49:42.253254 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:02.183) 0:02:31.062 ***********
2025-06-01 04:49:42.253265 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.253275 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.253286 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.253297 | orchestrator |
2025-06-01 04:49:42.253307 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-01 04:49:42.253318 | orchestrator | Sunday 01 June 2025 04:45:53 +0000 (0:00:00.364) 0:02:31.426 ***********
2025-06-01 04:49:42.253329 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.253339 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.253401 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.253413 | orchestrator |
2025-06-01 04:49:42.253423 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-01 04:49:42.253434 | orchestrator | Sunday 01 June 2025 04:45:53 +0000 (0:00:00.380) 0:02:31.807 ***********
2025-06-01 04:49:42.253445 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.253456 | orchestrator |
2025-06-01 04:49:42.253466 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-01 04:49:42.253477 | orchestrator |
Sunday 01 June 2025 04:45:54 +0000 (0:00:01.323) 0:02:33.130 ***********
2025-06-01 04:49:42.253489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:49:42.253502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:49:42.253521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:49:42.253545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:49:42.253558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:49:42.253570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:49:42.253582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:49:42.253599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:49:42.253611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:49:42.253622 | orchestrator |
2025-06-01 04:49:42.253639 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-06-01 04:49:42.253651 | orchestrator | Sunday 01 June 2025 04:45:59 +0000 (0:00:04.041) 0:02:37.172 ***********
2025-06-01 04:49:42.253667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:49:42.253679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:49:42.253691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:49:42.253709 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.253721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:49:42.253732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:49:42.253755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:49:42.253767 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.253778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:49:42.253790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:49:42.253807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:49:42.253818 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.253829 | orchestrator |
2025-06-01 04:49:42.253840 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-06-01 04:49:42.253851 | orchestrator | Sunday 01 June 2025 04:45:59 +0000 (0:00:00.528) 0:02:37.701 ***********
2025-06-01 04:49:42.253863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 04:49:42.253890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 04:49:42.253901 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.253912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 04:49:42.253940 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 04:49:42.253952 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.253963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 04:49:42.253979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 04:49:42.253990 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.254001 | orchestrator |
2025-06-01 04:49:42.254012 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-01 04:49:42.254059 | orchestrator | Sunday 01 June 2025 04:46:00 +0000 (0:00:00.890) 0:02:38.591 ***********
2025-06-01 04:49:42.254070 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.254081 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.254092 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.254102 | orchestrator |
2025-06-01 04:49:42.254113 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-01 04:49:42.254124 | orchestrator | Sunday 01 June 2025 04:46:01 +0000 (0:00:01.244) 0:02:39.835 ***********
2025-06-01 04:49:42.254135 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.254146 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.254157 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.254168 | orchestrator |
2025-06-01 04:49:42.254186 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-06-01 04:49:42.254197 | orchestrator | Sunday 01 June 2025 04:46:03 +0000 (0:00:02.047) 0:02:41.883 ***********
2025-06-01 04:49:42.254207 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.254218 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.254228 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.254240 | orchestrator |
2025-06-01 04:49:42.254250 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-06-01 04:49:42.254261 | orchestrator | Sunday 01 June 2025 04:46:04 +0000 (0:00:00.307) 0:02:42.191 ***********
2025-06-01 04:49:42.254272 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.254282 | orchestrator |
2025-06-01 04:49:42.254293 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-06-01 04:49:42.254304 | orchestrator | Sunday 01 June 2025 04:46:05 +0000 (0:00:01.154) 0:02:43.346 ***********
2025-06-01 04:49:42.254316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 04:49:42.254328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.254374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 04:49:42.254393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group':
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 04:49:42.254412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.254424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.254436 | orchestrator | 2025-06-01 04:49:42.254447 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-01 04:49:42.254458 | orchestrator | Sunday 01 June 2025 04:46:08 +0000 (0:00:03.361) 0:02:46.707 *********** 2025-06-01 04:49:42.254470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 04:49:42.254488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.254500 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.254520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 04:49:42.254538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 04:49:42.254549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.254561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.254572 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.254583 | orchestrator | 
skipping: [testbed-node-2] 2025-06-01 04:49:42.254594 | orchestrator | 2025-06-01 04:49:42.254604 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-01 04:49:42.254615 | orchestrator | Sunday 01 June 2025 04:46:09 +0000 (0:00:00.737) 0:02:47.445 *********** 2025-06-01 04:49:42.254631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-01 04:49:42.254643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-01 04:49:42.254664 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.254680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-01 04:49:42.254691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-01 04:49:42.254702 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.254713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-01 04:49:42.254724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-01 04:49:42.254735 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.254746 | orchestrator | 2025-06-01 04:49:42.254757 | 
orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-01 04:49:42.254768 | orchestrator | Sunday 01 June 2025 04:46:10 +0000 (0:00:01.461) 0:02:48.906 *********** 2025-06-01 04:49:42.254778 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.254789 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.254800 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.254810 | orchestrator | 2025-06-01 04:49:42.254821 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-01 04:49:42.254832 | orchestrator | Sunday 01 June 2025 04:46:12 +0000 (0:00:01.341) 0:02:50.248 *********** 2025-06-01 04:49:42.254843 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.254853 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.254864 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.254875 | orchestrator | 2025-06-01 04:49:42.254885 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-01 04:49:42.254896 | orchestrator | Sunday 01 June 2025 04:46:14 +0000 (0:00:02.045) 0:02:52.293 *********** 2025-06-01 04:49:42.254907 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.254918 | orchestrator | 2025-06-01 04:49:42.254929 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-01 04:49:42.254939 | orchestrator | Sunday 01 June 2025 04:46:15 +0000 (0:00:01.087) 0:02:53.381 *********** 2025-06-01 04:49:42.254951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-01 04:49:42.254963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.254986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-01 04:49:42.255026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255037 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-01 04:49:42.255049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255122 | orchestrator | 2025-06-01 04:49:42.255133 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-01 04:49:42.255144 | orchestrator | Sunday 01 June 2025 04:46:19 +0000 (0:00:04.049) 0:02:57.430 *********** 2025-06-01 04:49:42.255155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-01 04:49:42.255167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255218 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.255230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-01 04:49:42.255241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255281 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.255298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-01 04:49:42.255314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.255366 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.255377 | orchestrator | 2025-06-01 04:49:42.255388 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-01 04:49:42.255399 | orchestrator | Sunday 01 June 2025 04:46:19 +0000 (0:00:00.691) 0:02:58.122 *********** 2025-06-01 04:49:42.255410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-01 04:49:42.255421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-01 04:49:42.255438 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.255449 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-01 04:49:42.255460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-01 04:49:42.255471 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.255482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-01 04:49:42.255493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-01 04:49:42.255504 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.255515 | orchestrator | 2025-06-01 04:49:42.255525 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-01 04:49:42.255536 | orchestrator | Sunday 01 June 2025 04:46:20 +0000 (0:00:00.990) 0:02:59.113 *********** 2025-06-01 04:49:42.255547 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.255558 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.255569 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.255579 | orchestrator | 2025-06-01 04:49:42.255590 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-01 04:49:42.255601 | orchestrator | Sunday 01 June 2025 04:46:22 +0000 (0:00:01.730) 0:03:00.844 *********** 2025-06-01 04:49:42.255617 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.255629 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.255639 | orchestrator | changed: 
[testbed-node-2] 2025-06-01 04:49:42.255650 | orchestrator | 2025-06-01 04:49:42.255661 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-01 04:49:42.255671 | orchestrator | Sunday 01 June 2025 04:46:24 +0000 (0:00:02.084) 0:03:02.928 *********** 2025-06-01 04:49:42.255682 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.255693 | orchestrator | 2025-06-01 04:49:42.255704 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-01 04:49:42.255719 | orchestrator | Sunday 01 June 2025 04:46:25 +0000 (0:00:01.102) 0:03:04.030 *********** 2025-06-01 04:49:42.255730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:49:42.255741 | orchestrator | 2025-06-01 04:49:42.255752 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-01 04:49:42.255763 | orchestrator | Sunday 01 June 2025 04:46:28 +0000 (0:00:02.817) 0:03:06.848 *********** 2025-06-01 04:49:42.255775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:49:42.255793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 04:49:42.255805 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.255830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:49:42.255843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 04:49:42.255855 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.255874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:49:42.255886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 04:49:42.255898 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.255909 | orchestrator | 2025-06-01 04:49:42.255919 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-01 04:49:42.255935 | orchestrator | Sunday 01 June 2025 04:46:31 +0000 (0:00:02.682) 0:03:09.531 *********** 2025-06-01 04:49:42.255952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:49:42.255971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:49:42.255990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 04:49:42.256003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 04:49:42.256014 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256026 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:49:42.256076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 04:49:42.256087 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256098 | orchestrator | 2025-06-01 04:49:42.256109 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-01 04:49:42.256120 | orchestrator | Sunday 01 June 2025 04:46:33 +0000 (0:00:02.001) 0:03:11.532 *********** 2025-06-01 04:49:42.256137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 04:49:42.256154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 04:49:42.256165 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 04:49:42.256194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 04:49:42.256206 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 04:49:42.256229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 04:49:42.256240 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256251 | orchestrator | 2025-06-01 04:49:42.256262 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-01 04:49:42.256273 | orchestrator | Sunday 01 June 2025 04:46:35 +0000 (0:00:02.405) 0:03:13.937 *********** 2025-06-01 04:49:42.256284 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.256294 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.256305 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.256316 | orchestrator | 2025-06-01 04:49:42.256327 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-01 04:49:42.256337 | orchestrator | Sunday 01 June 2025 04:46:37 +0000 (0:00:02.077) 0:03:16.014 *********** 2025-06-01 04:49:42.256398 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256411 | orchestrator | skipping: [testbed-node-1] 
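The `custom_member_list` entries repeated in the mariadb items above follow a fixed pattern: the first node is the active backend and the remaining nodes are marked `backup`. A minimal sketch reproducing that pattern (`mariadb_member_lines` is a hypothetical helper for illustration, not the kolla-ansible template that actually rendered these lines):

```python
def mariadb_member_lines(nodes, port=3306):
    """Render HAProxy 'server' lines in the style seen in the log:
    the first node is the primary backend, all others get 'backup'."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        suffix = "" if i == 0 else " backup"
        lines.append(
            f" server {name} {addr}:{port} check port {port}"
            f" inter 2000 rise 2 fall 5{suffix}"
        )
    return lines

members = mariadb_member_lines(
    [("testbed-node-0", "192.168.16.10"),
     ("testbed-node-1", "192.168.16.11"),
     ("testbed-node-2", "192.168.16.12")]
)
```

This active/backup layout (rather than round-robin) keeps all Galera writes on a single node at a time, which is why only `testbed-node-0` lacks the `backup` flag in the log output.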
2025-06-01 04:49:42.256422 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256433 | orchestrator | 2025-06-01 04:49:42.256442 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-01 04:49:42.256452 | orchestrator | Sunday 01 June 2025 04:46:39 +0000 (0:00:01.668) 0:03:17.682 *********** 2025-06-01 04:49:42.256461 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256471 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256481 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256490 | orchestrator | 2025-06-01 04:49:42.256500 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-01 04:49:42.256510 | orchestrator | Sunday 01 June 2025 04:46:39 +0000 (0:00:00.325) 0:03:18.007 *********** 2025-06-01 04:49:42.256525 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.256542 | orchestrator | 2025-06-01 04:49:42.256552 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-01 04:49:42.256561 | orchestrator | Sunday 01 June 2025 04:46:40 +0000 (0:00:01.097) 0:03:19.105 *********** 2025-06-01 04:49:42.256576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2025-06-01 04:49:42.256588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-01 04:49:42.256598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-01 04:49:42.256608 | orchestrator | 2025-06-01 04:49:42.256618 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-01 04:49:42.256627 | orchestrator | Sunday 01 June 2025 04:46:42 +0000 (0:00:01.780) 0:03:20.885 *********** 2025-06-01 04:49:42.256637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-01 04:49:42.256647 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-01 04:49:42.256679 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-01 04:49:42.256703 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256713 | orchestrator | 2025-06-01 04:49:42.256722 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-01 04:49:42.256732 | orchestrator | Sunday 01 June 2025 04:46:43 +0000 (0:00:00.423) 0:03:21.309 *********** 2025-06-01 04:49:42.256742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-01 04:49:42.256752 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-01 04:49:42.256772 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-01 04:49:42.256792 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256802 | orchestrator | 2025-06-01 04:49:42.256811 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-01 04:49:42.256821 | orchestrator | Sunday 01 June 2025 04:46:43 +0000 (0:00:00.590) 0:03:21.899 *********** 2025-06-01 04:49:42.256830 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256840 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256849 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256859 | orchestrator | 2025-06-01 04:49:42.256868 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-01 04:49:42.256878 | orchestrator | Sunday 01 June 2025 04:46:44 +0000 (0:00:00.744) 0:03:22.644 *********** 2025-06-01 04:49:42.256887 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256897 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256906 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256916 | orchestrator | 2025-06-01 04:49:42.256926 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-01 04:49:42.256935 | orchestrator | Sunday 01 June 2025 04:46:45 +0000 (0:00:01.477) 0:03:24.121 *********** 2025-06-01 04:49:42.256945 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.256954 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.256964 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.256973 | orchestrator | 2025-06-01 04:49:42.256983 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-01 04:49:42.256998 | orchestrator | Sunday 01 June 2025 04:46:46 +0000 (0:00:00.358) 0:03:24.480 *********** 2025-06-01 04:49:42.257007 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.257017 | orchestrator | 2025-06-01 04:49:42.257027 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
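The `healthcheck` dicts repeated throughout these service definitions (string values, seconds as units, a `CMD-SHELL` test) correspond closely to Docker's container health-check options. A rough sketch of that mapping, assuming the dict shape shown in the log; this is an illustration, not kolla-ansible's own code:

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict into docker-run style
    flags. Values in the dict are strings and units are seconds, as in
    the log above; the 'CMD-SHELL' wrapper element is dropped because
    --health-cmd already runs its argument through a shell."""
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

flags = healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
})
```

The `healthcheck_curl`, `healthcheck_port`, and `healthcheck_listen` commands referenced in the `test` fields are helper scripts shipped inside the kolla images, so the checks run in the container's own network namespace.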
2025-06-01 04:49:42.257036 | orchestrator | Sunday 01 June 2025 04:46:47 +0000 (0:00:01.424) 0:03:25.904 *********** 2025-06-01 04:49:42.257051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 04:49:42.257066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 04:49:42.257113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.257178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 04:49:42.257423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.257464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.257497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 04:49:42.257534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.257595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 04:49:42.257623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 
04:49:42.257683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.257750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 04:49:42.257775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.257792 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.257870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.257906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.257917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.257932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})
2025-06-01 04:49:42.257947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.257957 | orchestrator |
2025-06-01 04:49:42.257967 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-06-01 04:49:42.257987 | orchestrator | Sunday 01 June 2025 04:46:52 +0000 (0:00:04.604) 0:03:30.508 ***********
2025-06-01 04:49:42.257998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port':
'9696'}}}})  2025-06-01 04:49:42.258008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 04:49:42.258120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.258188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 04:49:42.258245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.258265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-01 04:49:42.258291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258301 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.258312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258322 | orchestrator | skipping: [testbed-node-1]
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 04:49:42.258375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 04:49:42.258431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 04:49:42.258473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-01 04:49:42.258493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-01 04:49:42.258504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258518 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.258533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 04:49:42.258549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.258580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 04:49:42.258594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-01 04:49:42.258626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-01 04:49:42.258636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-01 04:49:42.258656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-01 04:49:42.258691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-01 04:49:42.258706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-01 04:49:42.258727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-01 04:49:42.258737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.258747 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.258757 | orchestrator |
2025-06-01 04:49:42.258766 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-01 04:49:42.258776 | orchestrator | Sunday 01 June 2025 04:46:54 +0000 (0:00:02.344) 0:03:32.853 ***********
2025-06-01 04:49:42.258793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 04:49:42.258804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 04:49:42.258813 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.258828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 04:49:42.258838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 04:49:42.258847 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.258862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 04:49:42.258872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 04:49:42.258882 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.258891 | orchestrator |
2025-06-01 04:49:42.258901 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-01 04:49:42.258911 | orchestrator | Sunday 01 June 2025 04:46:57 +0000 (0:00:02.428) 0:03:35.281 ***********
2025-06-01 04:49:42.258920 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.258930 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.258939 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.258949 | orchestrator |
2025-06-01 04:49:42.258958 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-01 04:49:42.258968 | orchestrator | Sunday 01 June 2025 04:46:58 +0000 (0:00:01.330) 0:03:36.611 ***********
2025-06-01 04:49:42.258977 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.258987 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.258996 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.259006 | orchestrator |
2025-06-01 04:49:42.259016 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-01 04:49:42.259025 | orchestrator | Sunday 01 June 2025 04:47:00 +0000 (0:00:02.336) 0:03:38.947 ***********
2025-06-01 04:49:42.259035 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.259044 | orchestrator |
2025-06-01 04:49:42.259054 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-01 04:49:42.259063 | orchestrator | Sunday 01 June 2025 04:47:02 +0000 (0:00:01.359) 0:03:40.307 ***********
2025-06-01 04:49:42.259074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259117 | orchestrator |
2025-06-01 04:49:42.259131 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-06-01 04:49:42.259141 | orchestrator | Sunday 01 June 2025 04:47:05 +0000 (0:00:03.600) 0:03:43.908 ***********
2025-06-01 04:49:42.259151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259161 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.259171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259181 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.259191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259206 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.259216 | orchestrator |
2025-06-01 04:49:42.259226 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-06-01 04:49:42.259236 | orchestrator | Sunday 01 June 2025 04:47:06 +0000 (0:00:00.514) 0:03:44.422 ***********
2025-06-01 04:49:42.259245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259266 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.259281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259301 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.259316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259336 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.259365 | orchestrator |
2025-06-01 04:49:42.259376 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-06-01 04:49:42.259385 | orchestrator | Sunday 01 June 2025 04:47:07 +0000 (0:00:00.786) 0:03:45.209 ***********
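(Editor's aside on reading this log: the long runs of `skipping:` items in the haproxy-config and service-deployment tasks above come from kolla-ansible looping over every entry of a service definition dict and acting only on the entries that apply to the current host. A minimal sketch of that selection rule — simplified and hypothetical in its details, since the real decision is made inside kolla-ansible's Jinja-templated service definitions — using dict shapes trimmed from the log above:)

```python
def services_to_deploy(services):
    """Return the container names this host should actually deploy.

    Simplified sketch: an entry is acted on only when it is enabled
    (kolla accepts both booleans and 'yes'/'no' strings) and the host
    is in the service's inventory group; everything else is 'skipping'.
    """
    deploy = []
    for svc in services.values():
        enabled = svc.get("enabled") in (True, "yes")
        in_groups = svc.get("host_in_groups", True)
        if enabled and in_groups:
            deploy.append(svc["container_name"])
    return deploy

# Dict shapes trimmed from the log above to the fields used here.
services = {
    "placement-api": {"container_name": "placement_api", "enabled": True},
    "nova-super-conductor": {"container_name": "nova_super_conductor",
                             "enabled": "no", "host_in_groups": True},
    "neutron-ovn-metadata-agent": {"container_name": "neutron_ovn_metadata_agent",
                                   "enabled": True, "host_in_groups": False},
}

print(services_to_deploy(services))  # → ['placement_api']
```

This matches the pattern in the log: `nova_super_conductor` is skipped because its `enabled` flag is the string `'no'`, and `neutron_ovn_metadata_agent` is skipped on testbed-node-2 because `host_in_groups` is false even though the service itself is enabled.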
2025-06-01 04:49:42.259395 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.259405 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.259415 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.259425 | orchestrator |
2025-06-01 04:49:42.259434 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-06-01 04:49:42.259444 | orchestrator | Sunday 01 June 2025 04:47:08 +0000 (0:00:01.735) 0:03:46.945 ***********
2025-06-01 04:49:42.259453 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.259463 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.259473 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.259482 | orchestrator |
2025-06-01 04:49:42.259492 | orchestrator | TASK [include_role : nova] *****************************************************
2025-06-01 04:49:42.259501 | orchestrator | Sunday 01 June 2025 04:47:11 +0000 (0:00:02.290) 0:03:49.235 ***********
2025-06-01 04:49:42.259511 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.259526 | orchestrator |
2025-06-01 04:49:42.259536 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-06-01 04:49:42.259546 | orchestrator | Sunday 01 June 2025 04:47:12 +0000 (0:00:01.300) 0:03:50.536 ***********
2025-06-01 04:49:42.259557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259680 | orchestrator |
2025-06-01 04:49:42.259690 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-06-01 04:49:42.259700 | orchestrator | Sunday 01 June 2025 04:47:17 +0000 (0:00:04.796) 0:03:55.333 ***********
2025-06-01 04:49:42.259710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259746 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.259767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259804 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.259814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.259825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 04:49:42.259851 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.259861 | orchestrator |
2025-06-01 04:49:42.259871 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-06-01 04:49:42.259881 | orchestrator | Sunday 01 June 2025 04:47:18 +0000 (0:00:01.054) 0:03:56.388 ***********
2025-06-01 04:49:42.259891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-01 04:49:42.259986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260018 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.260028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260068 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.260077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-01 04:49:42.260087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 04:49:42.260097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 04:49:42.260107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 04:49:42.260117 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.260126 | orchestrator | 2025-06-01 04:49:42.260136 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-01 04:49:42.260146 | orchestrator | Sunday 01 June 2025 04:47:19 +0000 (0:00:00.921) 0:03:57.309 *********** 2025-06-01 04:49:42.260155 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.260165 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.260174 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.260186 | orchestrator | 2025-06-01 04:49:42.260202 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-01 04:49:42.260216 | orchestrator | Sunday 01 June 2025 04:47:20 +0000 (0:00:01.800) 0:03:59.110 *********** 2025-06-01 04:49:42.260226 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.260236 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.260245 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.260255 | orchestrator | 2025-06-01 04:49:42.260265 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-01 04:49:42.260274 | orchestrator | Sunday 01 June 2025 04:47:23 +0000 (0:00:02.162) 0:04:01.273 *********** 2025-06-01 04:49:42.260284 | orchestrator | 
included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.260300 | orchestrator | 2025-06-01 04:49:42.260309 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-01 04:49:42.260326 | orchestrator | Sunday 01 June 2025 04:47:24 +0000 (0:00:01.714) 0:04:02.987 *********** 2025-06-01 04:49:42.260336 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-01 04:49:42.260406 | orchestrator | 2025-06-01 04:49:42.260418 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-01 04:49:42.260428 | orchestrator | Sunday 01 June 2025 04:47:26 +0000 (0:00:01.288) 0:04:04.276 *********** 2025-06-01 04:49:42.260443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 04:49:42.260454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 04:49:42.260465 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 04:49:42.260475 | orchestrator | 2025-06-01 04:49:42.260484 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-01 04:49:42.260494 | orchestrator | Sunday 01 June 2025 04:47:30 +0000 (0:00:03.985) 0:04:08.262 *********** 2025-06-01 04:49:42.260505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.260515 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.260525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.260534 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 04:49:42.260544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.260561 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.260571 | orchestrator | 2025-06-01 04:49:42.260580 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-01 04:49:42.260590 | orchestrator | Sunday 01 June 2025 04:47:31 +0000 (0:00:01.319) 0:04:09.581 *********** 2025-06-01 04:49:42.260606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 04:49:42.260617 | orchestrator | 2025-06-01 04:49:42 | INFO  | Task 4caeb60c-7d37-4d70-baa1-dbd80130785f is in state STARTED 2025-06-01 04:49:42.260773 | orchestrator | 2025-06-01 04:49:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:49:42.260794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 04:49:42.260806 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.260816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port':
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 04:49:42.260831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 04:49:42.260841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 04:49:42.260851 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.260861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 04:49:42.260871 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.260881 | orchestrator | 2025-06-01 04:49:42.260891 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 04:49:42.260901 | orchestrator | Sunday 01 June 2025 04:47:33 +0000 (0:00:01.934) 0:04:11.516 *********** 2025-06-01 04:49:42.260910 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.260918 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.260926 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.260933 | orchestrator | 2025-06-01 04:49:42.260942 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 04:49:42.260949 | orchestrator | Sunday 01 June 2025 04:47:35 +0000 (0:00:02.519) 0:04:14.036 *********** 2025-06-01 04:49:42.260957 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.260965 | orchestrator | changed: [testbed-node-1] 2025-06-01 
04:49:42.260973 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.260981 | orchestrator | 2025-06-01 04:49:42.260989 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-01 04:49:42.260997 | orchestrator | Sunday 01 June 2025 04:47:38 +0000 (0:00:03.044) 0:04:17.081 *********** 2025-06-01 04:49:42.261005 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-01 04:49:42.261013 | orchestrator | 2025-06-01 04:49:42.261021 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-01 04:49:42.261035 | orchestrator | Sunday 01 June 2025 04:47:39 +0000 (0:00:00.854) 0:04:17.935 *********** 2025-06-01 04:49:42.261044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.261052 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.261060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.261069 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.261096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.261106 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.261114 | orchestrator | 2025-06-01 04:49:42.261122 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-01 04:49:42.261134 | orchestrator | Sunday 01 June 2025 04:47:41 +0000 (0:00:01.423) 0:04:19.359 *********** 2025-06-01 04:49:42.261143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.261151 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.261159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.261167 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.261175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 04:49:42.261189 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.261197 | orchestrator | 2025-06-01 04:49:42.261205 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-01 04:49:42.261213 | orchestrator | Sunday 01 June 2025 04:47:42 +0000 (0:00:01.718) 0:04:21.077 *********** 2025-06-01 04:49:42.261220 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.261228 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.261236 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.261244 | orchestrator | 2025-06-01 04:49:42.261252 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 04:49:42.261259 | orchestrator | Sunday 01 June 2025 04:47:44 +0000 (0:00:01.207) 0:04:22.285 *********** 2025-06-01 04:49:42.261267 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:49:42.261275 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:49:42.261283 | orchestrator | ok: [testbed-node-2] 2025-06-01 
04:49:42.261291 | orchestrator | 2025-06-01 04:49:42.261299 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 04:49:42.261307 | orchestrator | Sunday 01 June 2025 04:47:46 +0000 (0:00:02.618) 0:04:24.903 *********** 2025-06-01 04:49:42.261315 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:49:42.261323 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:49:42.261331 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:49:42.261338 | orchestrator | 2025-06-01 04:49:42.261366 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-01 04:49:42.261376 | orchestrator | Sunday 01 June 2025 04:47:49 +0000 (0:00:03.152) 0:04:28.056 *********** 2025-06-01 04:49:42.261386 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-01 04:49:42.261395 | orchestrator | 2025-06-01 04:49:42.261404 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-01 04:49:42.261413 | orchestrator | Sunday 01 June 2025 04:47:51 +0000 (0:00:01.183) 0:04:29.239 *********** 2025-06-01 04:49:42.261423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 04:49:42.261432 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.261465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 04:49:42.261476 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.261486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 04:49:42.261496 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.261506 | orchestrator | 2025-06-01 04:49:42.261521 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-01 04:49:42.261530 | orchestrator | Sunday 01 June 2025 04:47:52 +0000 (0:00:01.042) 0:04:30.282 *********** 2025-06-01 04:49:42.261539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 04:49:42.261549 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.261559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 04:49:42.261569 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.261578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 04:49:42.261588 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.261597 | orchestrator | 2025-06-01 04:49:42.261606 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-01 04:49:42.261616 | orchestrator | Sunday 01 June 2025 04:47:53 +0000 (0:00:01.283) 0:04:31.566 *********** 2025-06-01 04:49:42.261625 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.261634 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.261643 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.261652 | orchestrator | 2025-06-01 04:49:42.261661 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 04:49:42.261671 | orchestrator | Sunday 01 June 2025 04:47:55 +0000 (0:00:01.957) 0:04:33.523 *********** 2025-06-01 04:49:42.261680 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:49:42.261689 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:49:42.261699 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:49:42.261708 | orchestrator | 2025-06-01 04:49:42.261717 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 04:49:42.261725 | orchestrator | Sunday 01 June 2025 04:47:57 +0000 (0:00:02.351) 0:04:35.875 *********** 2025-06-01 04:49:42.261733 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:49:42.261740 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:49:42.261748 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:49:42.261756 | orchestrator | 2025-06-01 04:49:42.261764 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-01 04:49:42.261772 | orchestrator | Sunday 01 June 2025 04:48:00 +0000 (0:00:03.240) 0:04:39.116 *********** 2025-06-01 04:49:42.261779 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.261787 | orchestrator | 2025-06-01 04:49:42.261795 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-01 04:49:42.261803 | orchestrator | Sunday 01 June 2025 04:48:02 +0000 (0:00:01.405) 0:04:40.521 *********** 2025-06-01 04:49:42.261836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.261852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 04:49:42.261861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.261870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.261879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 04:49:42.261887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.261922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.261932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.261940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.261948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.261957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 04:49:42.261965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 04:49:42.262002 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.262055 | orchestrator | 2025-06-01 04:49:42.262063 | orchestrator | TASK [haproxy-config : Add 
configuration for octavia when using single external frontend] *** 2025-06-01 04:49:42.262071 | orchestrator | Sunday 01 June 2025 04:48:06 +0000 (0:00:03.883) 0:04:44.405 *********** 2025-06-01 04:49:42.262079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.262088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 04:49:42.262096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.262153 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.262161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.262170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 04:49:42.262178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.262225 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.262238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 04:49:42.262247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 04:49:42.262255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 04:49:42.262272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 04:49:42.262285 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.262293 | orchestrator | 2025-06-01 04:49:42.262301 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-01 04:49:42.262309 | orchestrator | Sunday 01 June 2025 04:48:06 +0000 (0:00:00.743) 0:04:45.149 *********** 2025-06-01 04:49:42.262317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 04:49:42.262325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 04:49:42.262334 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.262375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 04:49:42.262389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 04:49:42.262398 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.262406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 04:49:42.262414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 04:49:42.262422 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.262430 | orchestrator | 2025-06-01 04:49:42.262437 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-01 04:49:42.262445 | orchestrator | Sunday 01 June 2025 04:48:07 +0000 (0:00:00.945) 0:04:46.094 *********** 2025-06-01 04:49:42.262453 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.262461 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.262469 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.262477 | orchestrator | 2025-06-01 04:49:42.262485 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-01 04:49:42.262492 | orchestrator | Sunday 01 June 2025 04:48:09 +0000 (0:00:01.941) 0:04:48.036 *********** 2025-06-01 04:49:42.262500 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:49:42.262508 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:49:42.262516 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:49:42.262524 | orchestrator | 2025-06-01 04:49:42.262532 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-01 04:49:42.262540 | orchestrator | Sunday 01 
June 2025 04:48:12 +0000 (0:00:02.170) 0:04:50.207 *********** 2025-06-01 04:49:42.262548 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.262556 | orchestrator | 2025-06-01 04:49:42.262564 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-01 04:49:42.262572 | orchestrator | Sunday 01 June 2025 04:48:13 +0000 (0:00:01.382) 0:04:51.589 *********** 2025-06-01 04:49:42.262580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:49:42.262594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:49:42.262621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:49:42.262634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:49:42.262644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:49:42.262659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:49:42.262667 | orchestrator | 2025-06-01 04:49:42.262675 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-01 04:49:42.262683 | orchestrator | Sunday 01 June 2025 04:48:19 +0000 (0:00:05.697) 0:04:57.286 *********** 2025-06-01 04:49:42.262713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:49:42.262723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:49:42.262731 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.262739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:49:42.262753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:49:42.262762 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.262787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:49:42.262801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:49:42.262809 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.262817 | orchestrator | 2025-06-01 04:49:42.262825 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-01 04:49:42.262833 | orchestrator | Sunday 01 June 2025 04:48:20 +0000 (0:00:01.098) 0:04:58.385 *********** 2025-06-01 04:49:42.262841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 04:49:42.262854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 04:49:42.262863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 04:49:42.262871 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.262879 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 04:49:42.262887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 04:49:42.262896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 04:49:42.262904 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.262912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 04:49:42.262920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 04:49:42.262928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 04:49:42.262936 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.262944 | orchestrator | 2025-06-01 04:49:42.262952 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-01 04:49:42.262960 | orchestrator | Sunday 01 June 2025 04:48:21 +0000 (0:00:00.932) 0:04:59.318 *********** 2025-06-01 
04:49:42.262967 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.262975 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.262983 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.262991 | orchestrator | 2025-06-01 04:49:42.262999 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-01 04:49:42.263007 | orchestrator | Sunday 01 June 2025 04:48:21 +0000 (0:00:00.498) 0:04:59.816 *********** 2025-06-01 04:49:42.263014 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.263022 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:49:42.263030 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:49:42.263038 | orchestrator | 2025-06-01 04:49:42.263063 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-01 04:49:42.263072 | orchestrator | Sunday 01 June 2025 04:48:23 +0000 (0:00:01.463) 0:05:01.280 *********** 2025-06-01 04:49:42.263080 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:49:42.263088 | orchestrator | 2025-06-01 04:49:42.263102 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-01 04:49:42.263110 | orchestrator | Sunday 01 June 2025 04:48:24 +0000 (0:00:01.789) 0:05:03.069 *********** 2025-06-01 04:49:42.263118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 04:49:42.263132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 04:49:42.263141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:49:42.263149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:49:42.263158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 04:49:42.263245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:49:42.263253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 04:49:42.263299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 04:49:42.263308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 04:49:42.263316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
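The loop items in this log all follow the kolla-ansible service-definition shape: a dict per service with an `enabled` flag and an optional `haproxy` mapping of frontend configs, each with its own `enabled`, `port`, and related keys. A minimal sketch (a hypothetical simplification for reading the log, not the actual role logic) of why some items report `changed` while others report `skipping`:

```python
# Hypothetical simplification of how haproxy-config decides, per loop item,
# whether a service dict contributes haproxy frontends ("changed") or is
# skipped. Keys mirror the dicts printed in the log above; values are toy data.
services = {
    "prometheus-server": {
        "enabled": True,
        "haproxy": {
            "prometheus_server": {"enabled": True, "port": "9091"},
            "prometheus_server_external": {"enabled": False, "port": "9091"},
        },
    },
    # Service disabled outright -> skipped entirely.
    "prometheus-openstack-exporter": {"enabled": False, "haproxy": {}},
    # No 'haproxy' key at all (exporters scraped directly) -> skipped.
    "prometheus-node-exporter": {"enabled": True},
}

def haproxy_frontends(services):
    """Yield (service, frontend_name, frontend_conf) for every enabled
    service that defines at least one enabled haproxy frontend."""
    for name, svc in services.items():
        if not svc.get("enabled") or not svc.get("haproxy"):
            continue
        for fe_name, fe_conf in svc["haproxy"].items():
            if fe_conf.get("enabled"):
                yield name, fe_name, fe_conf

result = list(haproxy_frontends(services))
```

Under these toy inputs only the internal `prometheus_server` frontend survives, matching the pattern above where `prometheus-server` items are `changed` while disabled services and exporters without a `haproxy` key are `skipping`.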
2025-06-01 04:49:42.263333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 04:49:42.263387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263405 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 04:49:42.263453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 04:49:42.263462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-01 04:49:42.263470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263487 | orchestrator | 2025-06-01 04:49:42.263495 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-01 04:49:42.263503 | orchestrator | Sunday 01 June 2025 04:48:29 +0000 (0:00:04.312) 0:05:07.382 *********** 2025-06-01 04:49:42.263511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 04:49:42.263519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:49:42.263540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 04:49:42.263575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 04:49:42.263583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263621 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:49:42.263629 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 04:49:42.263638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:49:42.263647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:49:42.263663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:49:42.263688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
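Each TASK header in this log carries profile_tasks-style timing: the per-task duration in parentheses followed by the cumulative play time (e.g. `(0:00:04.312) 0:05:07.382`). A small helper (hypothetical, purely for analyzing logs like this one) to pull those two numbers out of a header line:

```python
import re

# Matches timing pairs as they appear in the task headers of this log:
# "... (0:00:04.312) 0:05:07.382 ..." -> (per-task, cumulative).
TIMING = re.compile(
    r"\((?P<task>\d+:\d{2}:\d{2}\.\d{3})\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d{3})"
)

def to_seconds(hms: str) -> float:
    """Convert 'H:MM:SS.mmm' to seconds as a float."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line: str):
    """Return (task_seconds, cumulative_seconds), or None if the line
    carries no profile_tasks timing pair."""
    m = TIMING.search(line)
    if m is None:
        return None
    return to_seconds(m.group("task")), to_seconds(m.group("total"))

header = "Sunday 01 June 2025 04:48:29 +0000 (0:00:04.312) 0:05:07.382"
task_s, total_s = parse_timing(header)
```

Applied over a full console log, this makes it easy to rank tasks by duration and spot where a deploy run spends its time.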
2025-06-01 04:49:42.263697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 04:49:42.263705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:49:42.263714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:49:42.263722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 04:49:42.263730 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.263738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 04:49:42.263755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:49:42.263767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:49:42.263775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:49:42.263784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:49:42.263792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 04:49:42.263801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 04:49:42.263814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:49:42.263830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:49:42.263838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 04:49:42.263847 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.263855 | orchestrator |
2025-06-01 04:49:42.263863 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-06-01 04:49:42.263871 | orchestrator | Sunday 01 June 2025 04:48:30 +0000 (0:00:01.649) 0:05:09.032 ***********
2025-06-01 04:49:42.263879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-01 04:49:42.263887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-01 04:49:42.263896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 04:49:42.263904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 04:49:42.263913 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.263921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-01 04:49:42.263929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-01 04:49:42.263948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 04:49:42.263960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 04:49:42.263971 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.263981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-01 04:49:42.263991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-01 04:49:42.264003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 04:49:42.264020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 04:49:42.264031 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264042 | orchestrator |
2025-06-01 04:49:42.264051 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-06-01 04:49:42.264061 | orchestrator | Sunday 01 June 2025 04:48:31 +0000 (0:00:01.009) 0:05:10.042 ***********
2025-06-01 04:49:42.264068 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264075 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264081 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264088 | orchestrator |
2025-06-01 04:49:42.264094 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-06-01 04:49:42.264101 | orchestrator | Sunday 01 June 2025 04:48:32 +0000 (0:00:00.416) 0:05:10.458 ***********
2025-06-01 04:49:42.264108 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264114 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264121 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264128 | orchestrator |
2025-06-01 04:49:42.264134 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-06-01 04:49:42.264141 | orchestrator | Sunday 01 June 2025 04:48:34 +0000 (0:00:01.792) 0:05:12.251 ***********
2025-06-01 04:49:42.264148 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.264154 | orchestrator |
2025-06-01 04:49:42.264161 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-06-01 04:49:42.264167 | orchestrator | Sunday 01 June 2025 04:48:35 +0000 (0:00:01.732) 0:05:13.983 ***********
2025-06-01 04:49:42.264174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 04:49:42.264264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 04:49:42.264279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 04:49:42.264286 | orchestrator |
2025-06-01 04:49:42.264298 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-06-01 04:49:42.264305 | orchestrator | Sunday 01 June 2025 04:48:38 +0000 (0:00:02.645) 0:05:16.629 ***********
2025-06-01 04:49:42.264316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 04:49:42.264324 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 04:49:42.264359 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 04:49:42.264382 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264389 | orchestrator |
2025-06-01 04:49:42.264396 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-06-01 04:49:42.264403 | orchestrator | Sunday 01 June 2025 04:48:38 +0000 (0:00:00.418) 0:05:17.047 ***********
2025-06-01 04:49:42.264411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-01 04:49:42.264423 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-01 04:49:42.264444 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-01 04:49:42.264457 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264464 | orchestrator |
2025-06-01 04:49:42.264471 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-06-01 04:49:42.264477 | orchestrator | Sunday 01 June 2025 04:48:39 +0000 (0:00:01.017) 0:05:18.065 ***********
2025-06-01 04:49:42.264488 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264495 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264502 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264508 | orchestrator |
2025-06-01 04:49:42.264515 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-06-01 04:49:42.264522 | orchestrator | Sunday 01 June 2025 04:48:40 +0000 (0:00:00.454) 0:05:18.520 ***********
2025-06-01 04:49:42.264532 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264539 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264546 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264552 | orchestrator |
2025-06-01 04:49:42.264559 | orchestrator | TASK [include_role : skyline] **************************************************
2025-06-01 04:49:42.264566 | orchestrator | Sunday 01 June 2025 04:48:41 +0000 (0:00:01.423) 0:05:19.943 ***********
2025-06-01 04:49:42.264572 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:49:42.264584 | orchestrator |
2025-06-01 04:49:42.264591 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-06-01 04:49:42.264597 | orchestrator | Sunday 01 June 2025 04:48:43 +0000 (0:00:01.739) 0:05:21.683 ***********
2025-06-01 04:49:42.264608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264665 | orchestrator |
2025-06-01 04:49:42.264671 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-06-01 04:49:42.264678 | orchestrator | Sunday 01 June 2025 04:48:49 +0000 (0:00:06.051) 0:05:27.734 ***********
2025-06-01 04:49:42.264685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264720 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264734 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 04:49:42.264755 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264762 | orchestrator |
2025-06-01 04:49:42.264796 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-06-01 04:49:42.264813 | orchestrator | Sunday 01 June 2025 04:48:50 +0000 (0:00:00.674) 0:05:28.409 ***********
2025-06-01 04:49:42.264820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264852 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.264859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264886 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.264893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 04:49:42.264920 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.264927 | orchestrator |
2025-06-01 04:49:42.264934 |
orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-06-01 04:49:42.264940 | orchestrator | Sunday 01 June 2025 04:48:52 +0000 (0:00:01.814) 0:05:30.223 ***********
2025-06-01 04:49:42.264947 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.264953 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.264960 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.264967 | orchestrator |
2025-06-01 04:49:42.264973 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-06-01 04:49:42.264980 | orchestrator | Sunday 01 June 2025 04:48:53 +0000 (0:00:01.385) 0:05:31.609 ***********
2025-06-01 04:49:42.264987 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.264997 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.265004 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.265011 | orchestrator |
2025-06-01 04:49:42.265017 | orchestrator | TASK [include_role : swift] ****************************************************
2025-06-01 04:49:42.265024 | orchestrator | Sunday 01 June 2025 04:48:55 +0000 (0:00:02.224) 0:05:33.833 ***********
2025-06-01 04:49:42.265031 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265037 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265044 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265050 | orchestrator |
2025-06-01 04:49:42.265057 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-06-01 04:49:42.265064 | orchestrator | Sunday 01 June 2025 04:48:56 +0000 (0:00:00.336) 0:05:34.169 ***********
2025-06-01 04:49:42.265070 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265077 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265083 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265090 | orchestrator |
2025-06-01 04:49:42.265097 | orchestrator | TASK [include_role : trove] ****************************************************
2025-06-01 04:49:42.265107 | orchestrator | Sunday 01 June 2025 04:48:56 +0000 (0:00:00.309) 0:05:34.479 ***********
2025-06-01 04:49:42.265114 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265120 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265127 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265134 | orchestrator |
2025-06-01 04:49:42.265140 | orchestrator | TASK [include_role : venus] ****************************************************
2025-06-01 04:49:42.265147 | orchestrator | Sunday 01 June 2025 04:48:57 +0000 (0:00:00.726) 0:05:35.206 ***********
2025-06-01 04:49:42.265154 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265160 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265167 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265174 | orchestrator |
2025-06-01 04:49:42.265223 | orchestrator | TASK [include_role : watcher] **************************************************
2025-06-01 04:49:42.265238 | orchestrator | Sunday 01 June 2025 04:48:57 +0000 (0:00:00.321) 0:05:35.528 ***********
2025-06-01 04:49:42.265245 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265252 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265258 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265265 | orchestrator |
2025-06-01 04:49:42.265271 | orchestrator | TASK [include_role : zun] ******************************************************
2025-06-01 04:49:42.265278 | orchestrator | Sunday 01 June 2025 04:48:57 +0000 (0:00:00.336) 0:05:35.864 ***********
2025-06-01 04:49:42.265285 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265291 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265298 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265304 | orchestrator |
2025-06-01 04:49:42.265311 |
orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-06-01 04:49:42.265318 | orchestrator | Sunday 01 June 2025 04:48:58 +0000 (0:00:00.962) 0:05:36.827 ***********
2025-06-01 04:49:42.265324 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265331 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265338 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265362 | orchestrator |
2025-06-01 04:49:42.265371 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-06-01 04:49:42.265378 | orchestrator | Sunday 01 June 2025 04:48:59 +0000 (0:00:00.698) 0:05:37.526 ***********
2025-06-01 04:49:42.265384 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265391 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265397 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265404 | orchestrator |
2025-06-01 04:49:42.265410 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-06-01 04:49:42.265417 | orchestrator | Sunday 01 June 2025 04:48:59 +0000 (0:00:00.360) 0:05:37.886 ***********
2025-06-01 04:49:42.265424 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265430 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265441 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265448 | orchestrator |
2025-06-01 04:49:42.265455 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-06-01 04:49:42.265461 | orchestrator | Sunday 01 June 2025 04:49:00 +0000 (0:00:00.957) 0:05:38.844 ***********
2025-06-01 04:49:42.265468 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265474 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265481 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265488 | orchestrator |
2025-06-01 04:49:42.265494 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-06-01 04:49:42.265501 | orchestrator | Sunday 01 June 2025 04:49:02 +0000 (0:00:01.333) 0:05:40.177 ***********
2025-06-01 04:49:42.265508 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265514 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265521 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265527 | orchestrator |
2025-06-01 04:49:42.265534 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-06-01 04:49:42.265540 | orchestrator | Sunday 01 June 2025 04:49:02 +0000 (0:00:00.873) 0:05:41.051 ***********
2025-06-01 04:49:42.265547 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.265553 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.265560 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.265566 | orchestrator |
2025-06-01 04:49:42.265573 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-06-01 04:49:42.265579 | orchestrator | Sunday 01 June 2025 04:49:11 +0000 (0:00:08.412) 0:05:49.464 ***********
2025-06-01 04:49:42.265586 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265593 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265599 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265605 | orchestrator |
2025-06-01 04:49:42.265612 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-06-01 04:49:42.265618 | orchestrator | Sunday 01 June 2025 04:49:12 +0000 (0:00:00.762) 0:05:50.227 ***********
2025-06-01 04:49:42.265625 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.265632 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.265638 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.265645 | orchestrator |
2025-06-01 04:49:42.265651 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-06-01 04:49:42.265658 | orchestrator | Sunday 01 June 2025 04:49:25 +0000 (0:00:13.559) 0:06:03.787 ***********
2025-06-01 04:49:42.265664 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.265671 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.265678 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.265684 | orchestrator |
2025-06-01 04:49:42.265691 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-06-01 04:49:42.265697 | orchestrator | Sunday 01 June 2025 04:49:26 +0000 (0:00:00.773) 0:06:04.561 ***********
2025-06-01 04:49:42.265704 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:49:42.265711 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:49:42.265717 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:49:42.265724 | orchestrator |
2025-06-01 04:49:42.265730 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-06-01 04:49:42.265738 | orchestrator | Sunday 01 June 2025 04:49:30 +0000 (0:00:04.480) 0:06:09.041 ***********
2025-06-01 04:49:42.265744 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265751 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265757 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265791 | orchestrator |
2025-06-01 04:49:42.265799 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-06-01 04:49:42.265806 | orchestrator | Sunday 01 June 2025 04:49:31 +0000 (0:00:00.369) 0:06:09.411 ***********
2025-06-01 04:49:42.265813 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265825 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265832 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265839 | orchestrator |
2025-06-01 04:49:42.265850 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-06-01 04:49:42.265857 | orchestrator | Sunday 01 June 2025 04:49:32 +0000 (0:00:00.793) 0:06:10.205 ***********
2025-06-01 04:49:42.265864 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265874 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265881 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265887 | orchestrator |
2025-06-01 04:49:42.265894 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-06-01 04:49:42.265901 | orchestrator | Sunday 01 June 2025 04:49:32 +0000 (0:00:00.402) 0:06:10.607 ***********
2025-06-01 04:49:42.265907 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265914 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265920 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265927 | orchestrator |
2025-06-01 04:49:42.265934 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-06-01 04:49:42.265940 | orchestrator | Sunday 01 June 2025 04:49:32 +0000 (0:00:00.408) 0:06:11.016 ***********
2025-06-01 04:49:42.265947 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265953 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265960 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.265966 | orchestrator |
2025-06-01 04:49:42.265973 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-06-01 04:49:42.265979 | orchestrator | Sunday 01 June 2025 04:49:33 +0000 (0:00:00.381) 0:06:11.397 ***********
2025-06-01 04:49:42.265986 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:49:42.265992 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:49:42.265999 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:49:42.266006 | orchestrator |
2025-06-01 04:49:42.266012 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-06-01 04:49:42.266045 |
orchestrator | Sunday 01 June 2025 04:49:34 +0000 (0:00:00.809) 0:06:12.207 ***********
2025-06-01 04:49:42.266052 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.266058 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.266065 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.266072 | orchestrator |
2025-06-01 04:49:42.266078 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-01 04:49:42.266085 | orchestrator | Sunday 01 June 2025 04:49:38 +0000 (0:00:04.801) 0:06:17.009 ***********
2025-06-01 04:49:42.266091 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:49:42.266098 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:49:42.266104 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:49:42.266111 | orchestrator |
2025-06-01 04:49:42.266117 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:49:42.266124 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-01 04:49:42.266132 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-01 04:49:42.266139 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-01 04:49:42.266145 | orchestrator |
2025-06-01 04:49:42.266152 | orchestrator |
2025-06-01 04:49:42.266159 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:49:42.266165 | orchestrator | Sunday 01 June 2025 04:49:39 +0000 (0:00:00.894) 0:06:17.903 ***********
2025-06-01 04:49:42.266172 | orchestrator | ===============================================================================
2025-06-01 04:49:42.266179 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.56s
2025-06-01 04:49:42.266186 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.41s
2025-06-01 04:49:42.266192 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.05s
2025-06-01 04:49:42.266199 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.70s
2025-06-01 04:49:42.266210 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.80s
2025-06-01 04:49:42.266217 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.80s
2025-06-01 04:49:42.266224 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.63s
2025-06-01 04:49:42.266230 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.60s
2025-06-01 04:49:42.266237 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.48s
2025-06-01 04:49:42.266279 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.44s
2025-06-01 04:49:42.266287 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.40s
2025-06-01 04:49:42.266293 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.37s
2025-06-01 04:49:42.266300 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.31s
2025-06-01 04:49:42.266307 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.31s
2025-06-01 04:49:42.266313 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.13s
2025-06-01 04:49:42.266320 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.10s
2025-06-01 04:49:42.266326 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.05s
2025-06-01 04:49:42.266333 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.04s
2025-06-01 04:49:42.266340 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.99s
2025-06-01 04:49:42.266451 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.93s
2025-06-01 04:49:45.295880 | orchestrator | 2025-06-01 04:49:45 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:49:45.298866 | orchestrator | 2025-06-01 04:49:45 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:49:45.299572 | orchestrator | 2025-06-01 04:49:45 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:49:45.299995 | orchestrator | 2025-06-01 04:49:45 | INFO  | Task 4caeb60c-7d37-4d70-baa1-dbd80130785f is in state STARTED
2025-06-01 04:49:45.300025 | orchestrator | 2025-06-01 04:49:45 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:48.356921 | orchestrator | 2025-06-01 04:49:48 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:49:48.358192 | orchestrator | 2025-06-01 04:49:48 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:49:48.359763 | orchestrator | 2025-06-01 04:49:48 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:49:48.361131 | orchestrator | 2025-06-01 04:49:48 | INFO  | Task 4caeb60c-7d37-4d70-baa1-dbd80130785f is in state STARTED
2025-06-01 04:49:48.361176 | orchestrator | 2025-06-01 04:49:48 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:51.400516 | orchestrator | 2025-06-01 04:49:51 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:49:51.401304 | orchestrator | 2025-06-01 04:49:51 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:49:51.401866 | orchestrator | 2025-06-01 04:49:51 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state
STARTED
2025-06-01 04:49:51.403234 | orchestrator | 2025-06-01 04:49:51 | INFO  | Task 4caeb60c-7d37-4d70-baa1-dbd80130785f is in state SUCCESS
2025-06-01 04:49:51.403591 | orchestrator | 2025-06-01 04:49:51 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:54.449722 | orchestrator | 2025-06-01 04:49:54 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:49:54.452546 | orchestrator | 2025-06-01 04:49:54 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:49:54.453840 | orchestrator | 2025-06-01 04:49:54 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:49:54.453916 | orchestrator | 2025-06-01 04:49:54 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:49:57.492343 | orchestrator | 2025-06-01 04:49:57 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:49:57.492610 | orchestrator | 2025-06-01 04:49:57 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:49:57.493426 | orchestrator | 2025-06-01 04:49:57 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:49:57.493453 | orchestrator | 2025-06-01 04:49:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:50:00.539816 | orchestrator | 2025-06-01 04:50:00 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:50:00.539949 | orchestrator | 2025-06-01 04:50:00 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:50:00.540085 | orchestrator | 2025-06-01 04:50:00 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:50:00.540316 | orchestrator | 2025-06-01 04:50:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:50:03.576715 | orchestrator | 2025-06-01 04:50:03 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:50:03.578548 | orchestrator |
2025-06-01 04:50:03 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:50:03.580253 | orchestrator | 2025-06-01 04:50:03 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED
2025-06-01 04:50:03.580608 | orchestrator | 2025-06-01 04:50:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:51:13.868885 | orchestrator | 2025-06-01 04:51:13 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:51:13.870524 | orchestrator
| 2025-06-01 04:51:13 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:13.872623 | orchestrator | 2025-06-01 04:51:13 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:13.872658 | orchestrator | 2025-06-01 04:51:13 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:16.927807 | orchestrator | 2025-06-01 04:51:16 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:16.934196 | orchestrator | 2025-06-01 04:51:16 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:16.936470 | orchestrator | 2025-06-01 04:51:16 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:16.936537 | orchestrator | 2025-06-01 04:51:16 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:19.991824 | orchestrator | 2025-06-01 04:51:19 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:19.994009 | orchestrator | 2025-06-01 04:51:19 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:19.995691 | orchestrator | 2025-06-01 04:51:19 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:19.995734 | orchestrator | 2025-06-01 04:51:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:23.044044 | orchestrator | 2025-06-01 04:51:23 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:23.045279 | orchestrator | 2025-06-01 04:51:23 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:23.046935 | orchestrator | 2025-06-01 04:51:23 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:23.046999 | orchestrator | 2025-06-01 04:51:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:26.098061 | orchestrator | 2025-06-01 04:51:26 | INFO  | Task 
ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:26.100808 | orchestrator | 2025-06-01 04:51:26 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:26.102939 | orchestrator | 2025-06-01 04:51:26 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:26.103022 | orchestrator | 2025-06-01 04:51:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:29.148077 | orchestrator | 2025-06-01 04:51:29 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:29.149157 | orchestrator | 2025-06-01 04:51:29 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:29.153565 | orchestrator | 2025-06-01 04:51:29 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:29.153636 | orchestrator | 2025-06-01 04:51:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:32.197072 | orchestrator | 2025-06-01 04:51:32 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:32.198833 | orchestrator | 2025-06-01 04:51:32 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:32.200750 | orchestrator | 2025-06-01 04:51:32 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:32.200816 | orchestrator | 2025-06-01 04:51:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:51:35.254454 | orchestrator | 2025-06-01 04:51:35 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:35.255912 | orchestrator | 2025-06-01 04:51:35 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:35.257045 | orchestrator | 2025-06-01 04:51:35 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state STARTED 2025-06-01 04:51:35.257312 | orchestrator | 2025-06-01 04:51:35 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 04:51:38.304688 | orchestrator | 2025-06-01 04:51:38 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED 2025-06-01 04:51:38.306605 | orchestrator | 2025-06-01 04:51:38 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:51:38.313635 | orchestrator | 2025-06-01 04:51:38 | INFO  | Task 53faae21-7b35-4351-b039-d6e002e0c144 is in state SUCCESS 2025-06-01 04:51:38.316078 | orchestrator | 2025-06-01 04:51:38.316123 | orchestrator | None 2025-06-01 04:51:38.316135 | orchestrator | 2025-06-01 04:51:38.316147 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-01 04:51:38.316286 | orchestrator | 2025-06-01 04:51:38.316298 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-01 04:51:38.316309 | orchestrator | Sunday 01 June 2025 04:40:58 +0000 (0:00:00.690) 0:00:00.690 *********** 2025-06-01 04:51:38.316322 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.316336 | orchestrator | 2025-06-01 04:51:38.316347 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-01 04:51:38.316384 | orchestrator | Sunday 01 June 2025 04:40:59 +0000 (0:00:01.113) 0:00:01.804 *********** 2025-06-01 04:51:38.316396 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.316408 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.316419 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.316504 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.316545 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.316556 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.316567 | orchestrator | 2025-06-01 04:51:38.316578 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2025-06-01 04:51:38.316590 | orchestrator | Sunday 01 June 2025 04:41:00 +0000 (0:00:01.460) 0:00:03.264 *********** 2025-06-01 04:51:38.316600 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.316611 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.316622 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.316633 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.316643 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.316655 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.316674 | orchestrator | 2025-06-01 04:51:38.316698 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-01 04:51:38.316775 | orchestrator | Sunday 01 June 2025 04:41:01 +0000 (0:00:00.827) 0:00:04.091 *********** 2025-06-01 04:51:38.316797 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.316816 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.316834 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.316852 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.316870 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.316889 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.317197 | orchestrator | 2025-06-01 04:51:38.317218 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-01 04:51:38.317238 | orchestrator | Sunday 01 June 2025 04:41:02 +0000 (0:00:01.058) 0:00:05.150 *********** 2025-06-01 04:51:38.317257 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.317276 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.317295 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.317314 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.317333 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.317352 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.317370 | orchestrator | 2025-06-01 04:51:38.317389 | orchestrator | TASK 
[ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-01 04:51:38.317409 | orchestrator | Sunday 01 June 2025 04:41:03 +0000 (0:00:00.720) 0:00:05.870 *********** 2025-06-01 04:51:38.317427 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.317446 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.317502 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.317547 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.317566 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.317583 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.317601 | orchestrator | 2025-06-01 04:51:38.317618 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-01 04:51:38.317635 | orchestrator | Sunday 01 June 2025 04:41:03 +0000 (0:00:00.556) 0:00:06.426 *********** 2025-06-01 04:51:38.317653 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.317671 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.317689 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.317706 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.317723 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.317875 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.317975 | orchestrator | 2025-06-01 04:51:38.318185 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-01 04:51:38.318213 | orchestrator | Sunday 01 June 2025 04:41:04 +0000 (0:00:00.985) 0:00:07.412 *********** 2025-06-01 04:51:38.318233 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.318257 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.318276 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.318314 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.318384 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.318404 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
04:51:38.318423 | orchestrator | 2025-06-01 04:51:38.318442 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-01 04:51:38.318461 | orchestrator | Sunday 01 June 2025 04:41:05 +0000 (0:00:00.912) 0:00:08.324 *********** 2025-06-01 04:51:38.318480 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.318593 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.318617 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.318635 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.318653 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.318672 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.318690 | orchestrator | 2025-06-01 04:51:38.318709 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-01 04:51:38.318728 | orchestrator | Sunday 01 June 2025 04:41:06 +0000 (0:00:00.921) 0:00:09.246 *********** 2025-06-01 04:51:38.318808 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:51:38.318829 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:51:38.319234 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:51:38.319260 | orchestrator | 2025-06-01 04:51:38.319277 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-01 04:51:38.319294 | orchestrator | Sunday 01 June 2025 04:41:07 +0000 (0:00:00.647) 0:00:09.893 *********** 2025-06-01 04:51:38.319310 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.319326 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.319344 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.319361 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.319378 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.319396 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.319413 | 
orchestrator | 2025-06-01 04:51:38.319449 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-01 04:51:38.319465 | orchestrator | Sunday 01 June 2025 04:41:08 +0000 (0:00:01.164) 0:00:11.057 *********** 2025-06-01 04:51:38.319481 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:51:38.319497 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:51:38.319540 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:51:38.319557 | orchestrator | 2025-06-01 04:51:38.319573 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-01 04:51:38.319588 | orchestrator | Sunday 01 June 2025 04:41:11 +0000 (0:00:02.874) 0:00:13.932 *********** 2025-06-01 04:51:38.319636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 04:51:38.319652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 04:51:38.319668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 04:51:38.319725 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.319743 | orchestrator | 2025-06-01 04:51:38.319758 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-01 04:51:38.319774 | orchestrator | Sunday 01 June 2025 04:41:11 +0000 (0:00:00.588) 0:00:14.521 *********** 2025-06-01 04:51:38.319793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.319813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.319829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.319861 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.319877 | orchestrator | 2025-06-01 04:51:38.319893 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-01 04:51:38.319908 | orchestrator | Sunday 01 June 2025 04:41:12 +0000 (0:00:00.916) 0:00:15.437 *********** 2025-06-01 04:51:38.319926 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.319945 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.319962 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.319978 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.319993 | orchestrator | 2025-06-01 04:51:38.320019 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-01 04:51:38.320035 | orchestrator | Sunday 01 June 2025 04:41:13 +0000 (0:00:00.424) 0:00:15.862 *********** 2025-06-01 04:51:38.320051 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-01 04:41:09.112180', 'end': '2025-06-01 04:41:09.380945', 'delta': '0:00:00.268765', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.320096 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-01 04:41:10.132912', 'end': '2025-06-01 04:41:10.383932', 'delta': '0:00:00.251020', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  
2025-06-01 04:51:38.320114 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-01 04:41:10.929073', 'end': '2025-06-01 04:41:11.194105', 'delta': '0:00:00.265032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-01 04:51:38.320139 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320154 | orchestrator |
2025-06-01 04:51:38.320170 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-01 04:51:38.320185 | orchestrator | Sunday 01 June 2025 04:41:13 +0000 (0:00:00.245) 0:00:16.108 ***********
2025-06-01 04:51:38.320200 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.320216 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.320231 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.320246 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.320262 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.320277 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.320292 | orchestrator |
2025-06-01 04:51:38.320307 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-01 04:51:38.320323 | orchestrator | Sunday 01 June 2025 04:41:15 +0000 (0:00:01.464) 0:00:17.572 ***********
2025-06-01 04:51:38.320338 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.320354 | orchestrator |
2025-06-01 04:51:38.320391 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-01 04:51:38.320407 | orchestrator | Sunday 01 June 2025 04:41:15 +0000 (0:00:00.598) 0:00:18.170 ***********
2025-06-01 04:51:38.320422 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320436 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.320452 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.320468 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.320483 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.320498 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.320538 | orchestrator |
2025-06-01 04:51:38.320555 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-01 04:51:38.320572 | orchestrator | Sunday 01 June 2025 04:41:16 +0000 (0:00:01.159) 0:00:19.330 ***********
2025-06-01 04:51:38.320588 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320604 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.320613 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.320623 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.320632 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.320641 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.320651 | orchestrator |
2025-06-01 04:51:38.320660 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-01 04:51:38.320670 | orchestrator | Sunday 01 June 2025 04:41:18 +0000 (0:00:01.255) 0:00:20.585 ***********
2025-06-01 04:51:38.320679 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320689 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.320698 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.320707 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.320730 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.320740 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.320749 | orchestrator |
2025-06-01 04:51:38.320759 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-01 04:51:38.320769 | orchestrator | Sunday 01 June 2025 04:41:18 +0000 (0:00:00.938) 0:00:21.524 ***********
2025-06-01 04:51:38.320778 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320788 | orchestrator |
2025-06-01 04:51:38.320797 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-01 04:51:38.320807 | orchestrator | Sunday 01 June 2025 04:41:19 +0000 (0:00:00.098) 0:00:21.622 ***********
2025-06-01 04:51:38.320816 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320826 | orchestrator |
2025-06-01 04:51:38.320835 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-01 04:51:38.320845 | orchestrator | Sunday 01 June 2025 04:41:19 +0000 (0:00:00.172) 0:00:21.794 ***********
2025-06-01 04:51:38.320862 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320871 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.320881 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.320890 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.320900 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.320909 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.320919 | orchestrator |
2025-06-01 04:51:38.320928 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-01 04:51:38.320947 | orchestrator | Sunday 01 June 2025 04:41:19 +0000 (0:00:00.668) 0:00:22.463 ***********
2025-06-01 04:51:38.320957 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.320967 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.320976 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.320985 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.320995 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.321004 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.321014 | orchestrator |
2025-06-01 04:51:38.321023 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-01 04:51:38.321033 | orchestrator | Sunday 01 June 2025 04:41:21 +0000 (0:00:01.217) 0:00:23.680 ***********
2025-06-01 04:51:38.321045 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.321061 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.321078 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.321094 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.321110 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.321125 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.321141 | orchestrator |
2025-06-01 04:51:38.321157 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-01 04:51:38.321174 | orchestrator | Sunday 01 June 2025 04:41:21 +0000 (0:00:00.876) 0:00:24.556 ***********
2025-06-01 04:51:38.321191 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.321206 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.321221 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.321231 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.321240 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.321250 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.321259 | orchestrator |
2025-06-01 04:51:38.321269 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-01 04:51:38.321279 | orchestrator | Sunday 01 June 2025 04:41:23 +0000 (0:00:01.128) 0:00:25.685 ***********
2025-06-01 04:51:38.321288 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.321298 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.321307 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.321317 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.321326 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.321336 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.321345 | orchestrator |
2025-06-01 04:51:38.321355 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-01 04:51:38.321365 | orchestrator | Sunday 01 June 2025 04:41:23 +0000 (0:00:00.720) 0:00:26.406 ***********
2025-06-01 04:51:38.321374 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.321384 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.321393 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.321403 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.321413 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.321422 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.321432 | orchestrator |
2025-06-01 04:51:38.321441 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-01 04:51:38.321451 | orchestrator | Sunday 01 June 2025 04:41:24 +0000 (0:00:00.667) 0:00:27.073 ***********
2025-06-01 04:51:38.321461 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.321470 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.321487 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.321496 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.321506 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.321555 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.321566 | orchestrator |
2025-06-01 04:51:38.321576 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-01 04:51:38.321585 | orchestrator | Sunday 01 June 2025 04:41:25 +0000 (0:00:00.642) 0:00:27.716 ***********
2025-06-01 04:51:38.321596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part1', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part14', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part15', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part16', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 04:51:38.321728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 04:51:38.321740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 04:51:38.321776 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part1', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part14', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part15', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part16', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.321857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.321867 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.321877 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.321888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-06-01 04:51:38.321912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--24633ad7--3e48--5d36--bc1c--15adae99ed01-osd--block--24633ad7--3e48--5d36--bc1c--15adae99ed01', 'dm-uuid-LVM-1eUOzdbAnujbrmmQbf1u8TWwCKKehc4EsW3O8lHP2AY4FoheEDAi3yxRewteMMBh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2a6257e3--2619--5e00--b9d8--6074ce245854-osd--block--2a6257e3--2619--5e00--b9d8--6074ce245854', 'dm-uuid-LVM-jvbLPog2454BR2VqTPTDTQuqD0m7XmJHNq8L9Bml09d5fS7mp2MKgWxLY5pba4oZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.321996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509', 'scsi-SQEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part1', 'scsi-SQEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part14', 'scsi-SQEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part15', 'scsi-SQEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part16', 'scsi-SQEMU_QEMU_HARDDISK_f2e291b1-f353-4be0-8ae6-8e4ff272a509-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
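The "skipping" dumps above come from an Ansible task that loops over each host's `ansible_facts.devices` dict and passes over devices that are unusable as data disks: loop devices, device-mapper volumes already backing Ceph OSDs, the partitioned root disk, and the config-drive DVD. As a minimal sketch of that kind of selection logic (an assumption for illustration — not the actual OSISM playbook code), the filter reduces to a few checks on each device's facts:

```python
def eligible_data_devices(devices):
    """Return names of devices that look like empty, claimable data disks.

    `devices` mirrors the shape of ansible_facts.devices as seen in the
    log above: each value has at least 'partitions' and 'holders' keys.
    """
    selected = []
    for name, info in devices.items():
        # Virtual loop devices, device-mapper volumes, and optical drives
        # (loop0..loop7, dm-0/dm-1, sr0 in the log) are never candidates.
        if name.startswith(("loop", "dm-", "sr")):
            continue
        # A partitioned disk (sda with sda1/sda14/sda15/sda16) is the OS disk.
        if info.get("partitions"):
            continue
        # A disk with holders (sdb/sdc holding ceph-* LVs) is already in use.
        if info.get("holders"):
            continue
        selected.append(name)
    return selected
```

Applied to facts resembling testbed-node-3's above, only an empty disk like `sdd` would survive the filter, which matches the log: every listed item is skipped because it is a loop device, the partitioned root disk, an optical drive, or a disk already claimed by a Ceph LVM volume.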
2025-06-01 04:51:38.322160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part1', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part14', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part15', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part16', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--24633ad7--3e48--5d36--bc1c--15adae99ed01-osd--block--24633ad7--3e48--5d36--bc1c--15adae99ed01'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SfdbD4-DQeU-upZX-fFei-KrR8-spZ2-2tSadc', 'scsi-0QEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85', 'scsi-SQEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2a6257e3--2619--5e00--b9d8--6074ce245854-osd--block--2a6257e3--2619--5e00--b9d8--6074ce245854'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z5Okb9-7wiI-AUzs-6xEc-WeRK-3xcZ-hI4vGp', 'scsi-0QEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087', 'scsi-SQEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9', 'scsi-SQEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--baa7c707--8012--580f--8c9e--09def35a523c-osd--block--baa7c707--8012--580f--8c9e--09def35a523c', 'dm-uuid-LVM-PRLwnxcVzIsP7Q3HfzFKwdTPz1uGc6nycVh0jSEwLU2kbU5DsKCWhKIa7fzmgY4T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1f9d798--cc3d--57c0--9350--8228d94606be-osd--block--c1f9d798--cc3d--57c0--9350--8228d94606be', 'dm-uuid-LVM-AqU225ITWkMhxioP4SNN3vtZuUgxHr2CFmlfDeotkO8E502IVpeU2uNXBPoSaqMR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322274 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.322284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322309 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.322319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f-osd--block--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f', 'dm-uuid-LVM-ScHrvNPr8qDyCeO4x5OiVfWTfDnUmC7SHZYBYTkTtP6D42HpChnXEPORdGms420C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--308e0632--b76f--5a8e--af6f--04e4a02ef5a9-osd--block--308e0632--b76f--5a8e--af6f--04e4a02ef5a9', 'dm-uuid-LVM-h6G5GzXBE45l6hxKniWXpOW1h9rmmErUiA7TRJwQlqicY2yDsAM0il518CF0D2fU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part1', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part14', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part15', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part16', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--baa7c707--8012--580f--8c9e--09def35a523c-osd--block--baa7c707--8012--580f--8c9e--09def35a523c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-r4SpB9-BCLC-eYHP-lMrq-wCSy-3vhG-ZRqCC7', 'scsi-0QEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c', 'scsi-SQEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1f9d798--cc3d--57c0--9350--8228d94606be-osd--block--c1f9d798--cc3d--57c0--9350--8228d94606be'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DLl1Fq-KyrV-vfYI-RyK1-3lga-eE7q-zypSS7', 'scsi-0QEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79', 'scsi-SQEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322654 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:51:38.322674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110', 'scsi-SQEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part1', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part14', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part15', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part16', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322713 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f-osd--block--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-73KRxk-M406-MiXW-jgpk-jXkk-l5hx-WvE3Ux', 'scsi-0QEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af', 'scsi-SQEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322734 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.322743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--308e0632--b76f--5a8e--af6f--04e4a02ef5a9-osd--block--308e0632--b76f--5a8e--af6f--04e4a02ef5a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4EBCt5-xfUc-O52C-4B6h-6o6d-D1FV-ne9RND', 'scsi-0QEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c', 'scsi-SQEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2', 'scsi-SQEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:51:38.322782 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.322790 | orchestrator | 2025-06-01 04:51:38.322798 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-06-01 04:51:38.322806 | orchestrator | Sunday 01 June 2025 04:41:26 +0000 (0:00:01.464) 0:00:29.181 *********** 2025-06-01 04:51:38.322814 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322822 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322831 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322839 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322850 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322859 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322878 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322887 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322900 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part1', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part14', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part15', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part16', 'scsi-SQEMU_QEMU_HARDDISK_360dd9b1-930d-49be-ab9f-7b080f656ebe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-01 04:51:38.322922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322931 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.322940 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322948 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322956 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322964 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322976 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.322989 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323005 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323013 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323026 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part1', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part14', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part15', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part16', 'scsi-SQEMU_QEMU_HARDDISK_866e2d9c-bebc-4a8f-8f48-25266a5b8758-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323041 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323049 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.323063 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323071 | orchestrator 
| skipping: [testbed-node-2] => (items loop1-loop7, sda, sr0; false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2025-06-01 04:51:38.323217 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.323178 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; false_condition='osd_auto_discovery | default(False) | bool')
2025-06-01 04:51:38.323504 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.323283 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; false_condition='osd_auto_discovery | default(False) | bool')
2025-06-01 04:51:38.323819 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.323448 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7; false_condition='osd_auto_discovery | default(False) | bool')
2025-06-01 04:51:38.323892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part1', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part14', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part15', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part16', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-01 04:51:38.323902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f-osd--block--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-73KRxk-M406-MiXW-jgpk-jXkk-l5hx-WvE3Ux', 'scsi-0QEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af', 'scsi-SQEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--308e0632--b76f--5a8e--af6f--04e4a02ef5a9-osd--block--308e0632--b76f--5a8e--af6f--04e4a02ef5a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4EBCt5-xfUc-O52C-4B6h-6o6d-D1FV-ne9RND', 'scsi-0QEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c', 'scsi-SQEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2', 'scsi-SQEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:51:38.323946 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.323954 | orchestrator | 2025-06-01 04:51:38.323962 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-01 04:51:38.323970 | orchestrator | Sunday 01 June 2025 04:41:27 +0000 (0:00:01.185) 0:00:30.367 *********** 2025-06-01 04:51:38.323978 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.323986 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.323994 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.324005 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.324013 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.324021 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.324028 | orchestrator | 2025-06-01 04:51:38.324036 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-01 04:51:38.324044 | orchestrator | Sunday 01 June 2025 04:41:29 +0000 (0:00:01.550) 0:00:31.917 *********** 2025-06-01 04:51:38.324052 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.324060 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.324067 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.324075 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.324083 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.324090 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.324098 | orchestrator | 2025-06-01 04:51:38.324106 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-01 04:51:38.324113 | orchestrator | Sunday 01 June 2025 04:41:29 +0000 (0:00:00.493) 0:00:32.410 *********** 2025-06-01 04:51:38.324126 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
04:51:38.324134 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.324142 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.324150 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324157 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.324166 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324173 | orchestrator | 2025-06-01 04:51:38.324181 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-01 04:51:38.324209 | orchestrator | Sunday 01 June 2025 04:41:30 +0000 (0:00:00.607) 0:00:33.018 *********** 2025-06-01 04:51:38.324218 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.324226 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.324233 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.324241 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324249 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.324256 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324264 | orchestrator | 2025-06-01 04:51:38.324272 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-01 04:51:38.324280 | orchestrator | Sunday 01 June 2025 04:41:31 +0000 (0:00:00.769) 0:00:33.787 *********** 2025-06-01 04:51:38.324287 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.324295 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.324303 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.324310 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324318 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.324326 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324333 | orchestrator | 2025-06-01 04:51:38.324341 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-01 04:51:38.324349 | orchestrator | Sunday 01 June 
2025 04:41:31 +0000 (0:00:00.705) 0:00:34.492 *********** 2025-06-01 04:51:38.324357 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.324364 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.324372 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.324380 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324388 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.324395 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324403 | orchestrator | 2025-06-01 04:51:38.324411 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-01 04:51:38.324418 | orchestrator | Sunday 01 June 2025 04:41:32 +0000 (0:00:00.749) 0:00:35.242 *********** 2025-06-01 04:51:38.324426 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-01 04:51:38.324434 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:51:38.324442 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-01 04:51:38.324450 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 04:51:38.324457 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-01 04:51:38.324465 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-01 04:51:38.324473 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-01 04:51:38.324480 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-01 04:51:38.324488 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 04:51:38.324496 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-01 04:51:38.324504 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-01 04:51:38.324530 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-06-01 04:51:38.324538 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-01 04:51:38.324545 | orchestrator | ok: 
[testbed-node-3] => (item=testbed-node-2) 2025-06-01 04:51:38.324553 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-01 04:51:38.324561 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-01 04:51:38.324568 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-01 04:51:38.324587 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-01 04:51:38.324595 | orchestrator | 2025-06-01 04:51:38.324603 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-01 04:51:38.324611 | orchestrator | Sunday 01 June 2025 04:41:35 +0000 (0:00:02.767) 0:00:38.009 *********** 2025-06-01 04:51:38.324619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 04:51:38.324627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 04:51:38.324634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 04:51:38.324642 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.324650 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-01 04:51:38.324658 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-01 04:51:38.324666 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-01 04:51:38.324673 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.324681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-01 04:51:38.324689 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-01 04:51:38.324697 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-01 04:51:38.324704 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.324716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 04:51:38.324724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 04:51:38.324732 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 04:51:38.324740 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-01 04:51:38.324755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-01 04:51:38.324763 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-01 04:51:38.324771 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.324778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-01 04:51:38.324786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-01 04:51:38.324794 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-01 04:51:38.324801 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324809 | orchestrator | 2025-06-01 04:51:38.324817 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-01 04:51:38.324825 | orchestrator | Sunday 01 June 2025 04:41:36 +0000 (0:00:00.671) 0:00:38.681 *********** 2025-06-01 04:51:38.324833 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.324840 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.324848 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.324856 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.324864 | orchestrator | 2025-06-01 04:51:38.324872 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-01 04:51:38.324881 | orchestrator | Sunday 01 June 2025 04:41:36 +0000 (0:00:00.807) 0:00:39.489 *********** 2025-06-01 04:51:38.324889 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324896 | orchestrator | skipping: 
[testbed-node-4] 2025-06-01 04:51:38.324904 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324912 | orchestrator | 2025-06-01 04:51:38.324919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-01 04:51:38.324927 | orchestrator | Sunday 01 June 2025 04:41:37 +0000 (0:00:00.315) 0:00:39.805 *********** 2025-06-01 04:51:38.324935 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324943 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.324950 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.324958 | orchestrator | 2025-06-01 04:51:38.324966 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-01 04:51:38.324979 | orchestrator | Sunday 01 June 2025 04:41:37 +0000 (0:00:00.485) 0:00:40.291 *********** 2025-06-01 04:51:38.324987 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.324995 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.325002 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.325010 | orchestrator | 2025-06-01 04:51:38.325018 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-01 04:51:38.325026 | orchestrator | Sunday 01 June 2025 04:41:38 +0000 (0:00:00.408) 0:00:40.699 *********** 2025-06-01 04:51:38.325033 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.325041 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.325049 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.325057 | orchestrator | 2025-06-01 04:51:38.325064 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-01 04:51:38.325072 | orchestrator | Sunday 01 June 2025 04:41:38 +0000 (0:00:00.353) 0:00:41.052 *********** 2025-06-01 04:51:38.325080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:51:38.325088 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:51:38.325096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:51:38.325103 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.325111 | orchestrator | 2025-06-01 04:51:38.325119 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-01 04:51:38.325127 | orchestrator | Sunday 01 June 2025 04:41:38 +0000 (0:00:00.433) 0:00:41.485 *********** 2025-06-01 04:51:38.325134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:51:38.325142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:51:38.325150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:51:38.325158 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.325165 | orchestrator | 2025-06-01 04:51:38.325173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-01 04:51:38.325185 | orchestrator | Sunday 01 June 2025 04:41:39 +0000 (0:00:00.348) 0:00:41.834 *********** 2025-06-01 04:51:38.325193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:51:38.325200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:51:38.325208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:51:38.325216 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.325223 | orchestrator | 2025-06-01 04:51:38.325231 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-01 04:51:38.325239 | orchestrator | Sunday 01 June 2025 04:41:40 +0000 (0:00:00.738) 0:00:42.572 *********** 2025-06-01 04:51:38.325247 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.325255 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.325262 | orchestrator | ok: [testbed-node-5] 
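The long runs of "skipping" items earlier in this log all carry the same reason: `'false_condition': 'osd_auto_discovery | default(False) | bool'`. With the flag off, ceph-ansible evaluates every entry of `ansible_facts.devices` and skips it. The sketch below approximates that per-device filter in Python; the helper name and the exact eligibility rules (no partitions, no LVM/ceph holders, not removable, not a zero-size loop device) are illustrative assumptions, not ceph-ansible's actual implementation.

```python
# Hedged sketch of the osd_auto_discovery device filter seen in the skip
# messages above. Device dicts mimic ansible_facts.devices entries
# (sda with partitions, sdb held by a ceph LV, sr0 removable, loop0 empty).
# Rules here are assumptions for illustration, not ceph-ansible's code.

def eligible_osd_devices(devices, osd_auto_discovery=False):
    """Return device names an auto-discovery pass could claim as OSDs."""
    if not osd_auto_discovery:  # mirrors: osd_auto_discovery | default(False) | bool
        return []
    eligible = []
    for name, facts in devices.items():
        if facts.get("removable") == "1":        # skip CD-ROMs such as sr0
            continue
        if facts.get("sectors", 0) in (0, "0"):  # skip empty loop devices
            continue
        if facts.get("partitions"):              # skip already-partitioned disks (sda)
            continue
        if facts.get("holders"):                 # skip disks held by LVM/ceph (sdb, sdc)
            continue
        eligible.append(name)
    return sorted(eligible)
```

In the run above the flag defaulted to `False`, so the function's first branch applies and every device on testbed-node-4/5 is skipped, which is exactly what the log records.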
2025-06-01 04:51:38.325270 | orchestrator | 2025-06-01 04:51:38.325278 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-01 04:51:38.325286 | orchestrator | Sunday 01 June 2025 04:41:40 +0000 (0:00:00.723) 0:00:43.296 *********** 2025-06-01 04:51:38.325293 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-01 04:51:38.325301 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-01 04:51:38.325309 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-01 04:51:38.325317 | orchestrator | 2025-06-01 04:51:38.325325 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-01 04:51:38.325333 | orchestrator | Sunday 01 June 2025 04:41:41 +0000 (0:00:00.769) 0:00:44.065 *********** 2025-06-01 04:51:38.325344 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:51:38.325352 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:51:38.325360 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:51:38.325368 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-01 04:51:38.325381 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-01 04:51:38.325389 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-01 04:51:38.325396 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-01 04:51:38.325404 | orchestrator | 2025-06-01 04:51:38.325412 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-01 04:51:38.325419 | orchestrator | Sunday 01 June 2025 04:41:42 +0000 (0:00:00.946) 0:00:45.011 *********** 2025-06-01 04:51:38.325427 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-06-01 04:51:38.325435 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:51:38.325443 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:51:38.325450 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-01 04:51:38.325458 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-01 04:51:38.325466 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-01 04:51:38.325473 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-01 04:51:38.325481 | orchestrator | 2025-06-01 04:51:38.325489 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 04:51:38.325497 | orchestrator | Sunday 01 June 2025 04:41:44 +0000 (0:00:02.026) 0:00:47.038 *********** 2025-06-01 04:51:38.325505 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.325562 | orchestrator | 2025-06-01 04:51:38.325571 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 04:51:38.325578 | orchestrator | Sunday 01 June 2025 04:41:45 +0000 (0:00:01.179) 0:00:48.217 *********** 2025-06-01 04:51:38.325587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.325594 | orchestrator | 2025-06-01 04:51:38.325602 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 04:51:38.325610 | orchestrator | Sunday 01 June 2025 
04:41:47 +0000 (0:00:01.491) 0:00:49.708 *********** 2025-06-01 04:51:38.325618 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.325626 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.325634 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.325642 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.325649 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.325657 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.325665 | orchestrator | 2025-06-01 04:51:38.325673 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 04:51:38.325680 | orchestrator | Sunday 01 June 2025 04:41:48 +0000 (0:00:00.868) 0:00:50.577 *********** 2025-06-01 04:51:38.325688 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.325696 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.325704 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.325712 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.325719 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.325727 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.325735 | orchestrator | 2025-06-01 04:51:38.325743 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 04:51:38.325751 | orchestrator | Sunday 01 June 2025 04:41:50 +0000 (0:00:02.024) 0:00:52.602 *********** 2025-06-01 04:51:38.325758 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.325766 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.325774 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.325787 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.325794 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.325804 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.325811 | orchestrator | 2025-06-01 04:51:38.325817 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-06-01 04:51:38.325824 | orchestrator | Sunday 01 June 2025 04:41:50 +0000 (0:00:00.948) 0:00:53.550 *********** 2025-06-01 04:51:38.325831 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.325837 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.325844 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.325850 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.325857 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.325863 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.325870 | orchestrator | 2025-06-01 04:51:38.325877 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 04:51:38.325883 | orchestrator | Sunday 01 June 2025 04:41:52 +0000 (0:00:01.442) 0:00:54.993 *********** 2025-06-01 04:51:38.325890 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.325896 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.325903 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.325909 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.325916 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.325923 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.325929 | orchestrator | 2025-06-01 04:51:38.325936 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 04:51:38.325942 | orchestrator | Sunday 01 June 2025 04:41:53 +0000 (0:00:00.715) 0:00:55.709 *********** 2025-06-01 04:51:38.325953 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.325960 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.325966 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.325973 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.325980 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.325986 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.325993 | 
orchestrator |
2025-06-01 04:51:38.325999 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 04:51:38.326006 | orchestrator | Sunday 01 June 2025 04:41:53 +0000 (0:00:00.573) 0:00:56.282 ***********
2025-06-01 04:51:38.326013 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326053 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326060 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326066 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.326073 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.326080 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.326087 | orchestrator |
2025-06-01 04:51:38.326093 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 04:51:38.326100 | orchestrator | Sunday 01 June 2025 04:41:54 +0000 (0:00:00.752) 0:00:57.035 ***********
2025-06-01 04:51:38.326107 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.326113 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.326120 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.326126 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326133 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326139 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326146 | orchestrator |
2025-06-01 04:51:38.326152 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 04:51:38.326159 | orchestrator | Sunday 01 June 2025 04:41:55 +0000 (0:00:01.422) 0:00:58.457 ***********
2025-06-01 04:51:38.326166 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.326172 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.326179 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.326185 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326192 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326198 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326205 | orchestrator |
2025-06-01 04:51:38.326211 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 04:51:38.326225 | orchestrator | Sunday 01 June 2025 04:41:57 +0000 (0:00:01.180) 0:00:59.638 ***********
2025-06-01 04:51:38.326232 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326239 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326245 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326252 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.326258 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.326265 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.326272 | orchestrator |
2025-06-01 04:51:38.326278 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 04:51:38.326285 | orchestrator | Sunday 01 June 2025 04:41:57 +0000 (0:00:00.512) 0:01:00.150 ***********
2025-06-01 04:51:38.326292 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.326298 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.326305 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.326311 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.326318 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.326325 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.326331 | orchestrator |
2025-06-01 04:51:38.326338 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 04:51:38.326344 | orchestrator | Sunday 01 June 2025 04:41:58 +0000 (0:00:00.614) 0:01:00.764 ***********
2025-06-01 04:51:38.326351 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326358 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326364 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326371 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326377 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326384 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326390 | orchestrator |
2025-06-01 04:51:38.326397 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 04:51:38.326403 | orchestrator | Sunday 01 June 2025 04:41:58 +0000 (0:00:00.501) 0:01:01.265 ***********
2025-06-01 04:51:38.326410 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326416 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326423 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326430 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326436 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326443 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326449 | orchestrator |
2025-06-01 04:51:38.326456 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 04:51:38.326462 | orchestrator | Sunday 01 June 2025 04:41:59 +0000 (0:00:00.653) 0:01:01.919 ***********
2025-06-01 04:51:38.326469 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326476 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326482 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326489 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326499 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326506 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326523 | orchestrator |
2025-06-01 04:51:38.326530 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 04:51:38.326537 | orchestrator | Sunday 01 June 2025 04:41:59 +0000 (0:00:00.552) 0:01:02.471 ***********
2025-06-01 04:51:38.326543 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326550 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326556 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326563 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.326570 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.326576 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.326583 | orchestrator |
2025-06-01 04:51:38.326589 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 04:51:38.326596 | orchestrator | Sunday 01 June 2025 04:42:00 +0000 (0:00:00.640) 0:01:03.112 ***********
2025-06-01 04:51:38.326603 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.326614 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.326620 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.326627 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.326633 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.326640 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.326646 | orchestrator |
2025-06-01 04:51:38.326653 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 04:51:38.326670 | orchestrator | Sunday 01 June 2025 04:42:01 +0000 (0:00:00.507) 0:01:03.619 ***********
2025-06-01 04:51:38.326677 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.326684 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.326690 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.326697 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.326703 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.326710 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.326716 | orchestrator |
2025-06-01 04:51:38.326723 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 04:51:38.326730 | orchestrator | Sunday 01 June 2025 04:42:01 +0000 (0:00:00.638) 0:01:04.258 ***********
2025-06-01 04:51:38.326736 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.326743 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.326749 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.326756 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326762 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326769 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326775 | orchestrator |
2025-06-01 04:51:38.326782 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 04:51:38.326789 | orchestrator | Sunday 01 June 2025 04:42:02 +0000 (0:00:00.660) 0:01:04.918 ***********
2025-06-01 04:51:38.326795 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.326802 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.326808 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.326815 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.326821 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.326828 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.326835 | orchestrator |
2025-06-01 04:51:38.326841 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-01 04:51:38.326848 | orchestrator | Sunday 01 June 2025 04:42:03 +0000 (0:00:01.323) 0:01:06.242 ***********
2025-06-01 04:51:38.326855 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.326861 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.326868 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.326874 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.326881 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.326887 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.326894 | orchestrator |
2025-06-01 04:51:38.326900 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-01 04:51:38.326907 | orchestrator | Sunday 01 June 2025 04:42:05 +0000 (0:00:01.769) 0:01:08.011 ***********
2025-06-01 04:51:38.326914 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.326920 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.326927 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.326933 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.326940 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.326946 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.326953 | orchestrator |
2025-06-01 04:51:38.326960 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-01 04:51:38.326966 | orchestrator | Sunday 01 June 2025 04:42:07 +0000 (0:00:01.090) 0:01:09.900 ***********
2025-06-01 04:51:38.326973 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.326980 | orchestrator |
2025-06-01 04:51:38.326986 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-01 04:51:38.326998 | orchestrator | Sunday 01 June 2025 04:42:08 +0000 (0:00:01.090) 0:01:10.990 ***********
2025-06-01 04:51:38.327004 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327011 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327017 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327024 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327031 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327037 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327044 | orchestrator |
2025-06-01 04:51:38.327050 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-01 04:51:38.327057 | orchestrator | Sunday 01 June 2025 04:42:09 +0000 (0:00:00.757) 0:01:11.748 ***********
2025-06-01 04:51:38.327064 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327070 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327077 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327083 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327090 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327096 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327103 | orchestrator |
2025-06-01 04:51:38.327109 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-06-01 04:51:38.327116 | orchestrator | Sunday 01 June 2025 04:42:09 +0000 (0:00:00.545) 0:01:12.293 ***********
2025-06-01 04:51:38.327123 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 04:51:38.327133 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 04:51:38.327140 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 04:51:38.327146 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 04:51:38.327153 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 04:51:38.327159 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 04:51:38.327166 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 04:51:38.327172 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 04:51:38.327179 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 04:51:38.327186 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 04:51:38.327192 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 04:51:38.327199 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 04:51:38.327205 | orchestrator |
2025-06-01 04:51:38.327215 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-06-01 04:51:38.327222 | orchestrator | Sunday 01 June 2025 04:42:11 +0000 (0:00:01.450) 0:01:13.743 ***********
2025-06-01 04:51:38.327229 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.327235 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.327242 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.327249 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.327255 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.327262 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.327268 | orchestrator |
2025-06-01 04:51:38.327275 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-06-01 04:51:38.327281 | orchestrator | Sunday 01 June 2025 04:42:12 +0000 (0:00:00.858) 0:01:14.602 ***********
2025-06-01 04:51:38.327288 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327294 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327301 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327307 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327314 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327321 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327332 | orchestrator |
2025-06-01 04:51:38.327339 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-06-01 04:51:38.327345 | orchestrator | Sunday 01 June 2025 04:42:12 +0000 (0:00:00.802) 0:01:15.405 ***********
2025-06-01 04:51:38.327352 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327359 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327365 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327372 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327378 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327385 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327391 | orchestrator |
2025-06-01 04:51:38.327398 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-06-01 04:51:38.327405 | orchestrator | Sunday 01 June 2025 04:42:13 +0000 (0:00:00.541) 0:01:15.946 ***********
2025-06-01 04:51:38.327411 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327418 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327424 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327431 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327437 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327444 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327451 | orchestrator |
2025-06-01 04:51:38.327457 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-06-01 04:51:38.327464 | orchestrator | Sunday 01 June 2025 04:42:14 +0000 (0:00:00.732) 0:01:16.679 ***********
2025-06-01 04:51:38.327471 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.327477 | orchestrator |
2025-06-01 04:51:38.327484 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-06-01 04:51:38.327490 | orchestrator | Sunday 01 June 2025 04:42:15 +0000 (0:00:01.146) 0:01:17.826 ***********
2025-06-01 04:51:38.327497 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.327504 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.327522 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.327529 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.327536 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.327542 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.327549 | orchestrator |
2025-06-01 04:51:38.327555 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-06-01 04:51:38.327562 | orchestrator | Sunday 01 June 2025 04:43:09 +0000 (0:00:54.325) 0:02:12.151 ***********
2025-06-01 04:51:38.327569 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-01 04:51:38.327575 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-01 04:51:38.327582 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-01 04:51:38.327588 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327595 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-01 04:51:38.327601 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-01 04:51:38.327608 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-01 04:51:38.327615 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327621 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-01 04:51:38.327631 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-01 04:51:38.327638 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-01 04:51:38.327644 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327651 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-01 04:51:38.327657 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-01 04:51:38.327669 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-01 04:51:38.327675 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327682 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-01 04:51:38.327689 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-01 04:51:38.327695 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-01 04:51:38.327702 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327708 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-01 04:51:38.327715 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-01 04:51:38.327721 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-01 04:51:38.327731 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327738 | orchestrator |
2025-06-01 04:51:38.327745 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-06-01 04:51:38.327752 | orchestrator | Sunday 01 June 2025 04:43:10 +0000 (0:00:00.809) 0:02:12.960 ***********
2025-06-01 04:51:38.327758 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327765 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327771 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327778 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327784 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327791 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327797 | orchestrator |
2025-06-01 04:51:38.327804 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-06-01 04:51:38.327811 | orchestrator | Sunday 01 June 2025 04:43:10 +0000 (0:00:00.526) 0:02:13.487 ***********
2025-06-01 04:51:38.327817 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327824 | orchestrator |
2025-06-01 04:51:38.327831 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-06-01 04:51:38.327837 | orchestrator | Sunday 01 June 2025 04:43:11 +0000 (0:00:00.137) 0:02:13.624 ***********
2025-06-01 04:51:38.327844 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327850 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327857 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327863 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327870 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327876 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327883 | orchestrator |
2025-06-01 04:51:38.327889 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-06-01 04:51:38.327896 | orchestrator | Sunday 01 June 2025 04:43:11 +0000 (0:00:00.806) 0:02:14.431 ***********
2025-06-01 04:51:38.327902 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327909 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327916 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327922 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327929 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327935 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.327942 | orchestrator |
2025-06-01 04:51:38.327948 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-06-01 04:51:38.327955 | orchestrator | Sunday 01 June 2025 04:43:12 +0000 (0:00:00.558) 0:02:14.990 ***********
2025-06-01 04:51:38.327962 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.327968 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.327975 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.327981 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.327988 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.327994 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328001 | orchestrator |
2025-06-01 04:51:38.328008 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-06-01 04:51:38.328014 | orchestrator | Sunday 01 June 2025 04:43:13 +0000 (0:00:00.769) 0:02:15.759 ***********
2025-06-01 04:51:38.328025 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.328032 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.328038 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.328045 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.328051 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.328058 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.328064 | orchestrator |
2025-06-01 04:51:38.328071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-06-01 04:51:38.328077 | orchestrator | Sunday 01 June 2025 04:43:15 +0000 (0:00:02.006) 0:02:17.766 ***********
2025-06-01 04:51:38.328084 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.328091 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.328097 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.328103 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.328110 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.328116 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.328123 | orchestrator |
2025-06-01 04:51:38.328129 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-06-01 04:51:38.328136 | orchestrator | Sunday 01 June 2025 04:43:16 +0000 (0:00:01.072) 0:02:18.838 ***********
2025-06-01 04:51:38.328143 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.328150 | orchestrator |
2025-06-01 04:51:38.328157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-06-01 04:51:38.328164 | orchestrator | Sunday 01 June 2025 04:43:17 +0000 (0:00:01.339) 0:02:20.177 ***********
2025-06-01 04:51:38.328170 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328177 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328183 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328193 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328200 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328206 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328213 | orchestrator |
2025-06-01 04:51:38.328219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-06-01 04:51:38.328226 | orchestrator | Sunday 01 June 2025 04:43:18 +0000 (0:00:00.763) 0:02:20.941 ***********
2025-06-01 04:51:38.328233 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328239 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328246 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328252 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328259 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328265 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328272 | orchestrator |
2025-06-01 04:51:38.328278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-06-01 04:51:38.328285 | orchestrator | Sunday 01 June 2025 04:43:19 +0000 (0:00:01.004) 0:02:21.946 ***********
2025-06-01 04:51:38.328292 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328298 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328304 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328311 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328318 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328324 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328331 | orchestrator |
2025-06-01 04:51:38.328341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-01 04:51:38.328347 | orchestrator | Sunday 01 June 2025 04:43:19 +0000 (0:00:00.610) 0:02:22.556 ***********
2025-06-01 04:51:38.328354 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328361 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328367 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328374 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328380 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328387 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328398 | orchestrator |
2025-06-01 04:51:38.328404 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-01 04:51:38.328411 | orchestrator | Sunday 01 June 2025 04:43:20 +0000 (0:00:00.764) 0:02:23.321 ***********
2025-06-01 04:51:38.328418 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328424 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328431 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328437 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328443 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328450 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328456 | orchestrator |
2025-06-01 04:51:38.328463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-01 04:51:38.328470 | orchestrator | Sunday 01 June 2025 04:43:21 +0000 (0:00:00.657) 0:02:23.979 ***********
2025-06-01 04:51:38.328476 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328483 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328489 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328496 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328502 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328509 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328527 | orchestrator |
2025-06-01 04:51:38.328533 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-01 04:51:38.328540 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.659) 0:02:24.638 ***********
2025-06-01 04:51:38.328547 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328553 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328560 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328566 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328573 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328579 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328586 | orchestrator |
2025-06-01 04:51:38.328593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-01 04:51:38.328599 | orchestrator | Sunday 01 June 2025 04:43:22 +0000 (0:00:00.538) 0:02:25.177 ***********
2025-06-01 04:51:38.328606 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.328612 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.328619 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.328625 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.328632 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.328638 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.328645 | orchestrator |
2025-06-01 04:51:38.328651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-01 04:51:38.328658 | orchestrator | Sunday 01 June 2025 04:43:23 +0000 (0:00:00.806) 0:02:25.983 ***********
2025-06-01 04:51:38.328665 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.328671 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.328678 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.328684 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.328691 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.328697 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.328704 | orchestrator |
2025-06-01 04:51:38.328711 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-01 04:51:38.328717 | orchestrator | Sunday 01 June 2025 04:43:24 +0000 (0:00:01.071) 0:02:27.055 ***********
2025-06-01 04:51:38.328724 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.328730 | orchestrator |
2025-06-01 04:51:38.328737 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-01 04:51:38.328744 | orchestrator | Sunday 01 June 2025 04:43:25 +0000 (0:00:01.145) 0:02:28.200 ***********
2025-06-01 04:51:38.328750 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-01 04:51:38.328757 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-01 04:51:38.328768 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-01 04:51:38.328775 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-01 04:51:38.328781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-01 04:51:38.328788 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-01 04:51:38.328798 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-01 04:51:38.328804 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-01 04:51:38.328811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-01 04:51:38.328817 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-01 04:51:38.328824 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-01 04:51:38.328831 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-01 04:51:38.328837 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-01 04:51:38.328844 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-01 04:51:38.328850 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-01 04:51:38.328857 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-01 04:51:38.328863 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-01 04:51:38.328870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-01 04:51:38.328876 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-01 04:51:38.328883 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-01 04:51:38.328889 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-01 04:51:38.328899 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-01 04:51:38.328907 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-01 04:51:38.328913 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-01 04:51:38.328920 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-01 04:51:38.328926 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-01 04:51:38.328933 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-01 04:51:38.328939 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-01 04:51:38.328946 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-01 04:51:38.328952 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-01 04:51:38.328959 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-01 04:51:38.328965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-01 04:51:38.328972 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-01 04:51:38.328978 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-01 04:51:38.328985 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-01 04:51:38.328991 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-01 04:51:38.328998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-01 04:51:38.329004 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-01 04:51:38.329011 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-01 04:51:38.329018 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-01 04:51:38.329024 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-01 04:51:38.329031 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-01 04:51:38.329037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-01 04:51:38.329044 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-01 04:51:38.329050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-01 04:51:38.329057 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-01 04:51:38.329067 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-01 04:51:38.329074 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 04:51:38.329080 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 04:51:38.329090 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 04:51:38.329100 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 04:51:38.329109 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-01 04:51:38.329120 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 04:51:38.329132 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 04:51:38.329142 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 04:51:38.329158 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 04:51:38.329173 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 04:51:38.329184 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 04:51:38.329195 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 04:51:38.329206 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 04:51:38.329217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 04:51:38.329227 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 04:51:38.329238 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 04:51:38.329249 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 04:51:38.329260 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 04:51:38.329271 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 04:51:38.329288 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 04:51:38.329298
| orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 04:51:38.329310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 04:51:38.329321 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 04:51:38.329333 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 04:51:38.329344 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 04:51:38.329355 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 04:51:38.329367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 04:51:38.329379 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 04:51:38.329388 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 04:51:38.329395 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 04:51:38.329401 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 04:51:38.329408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 04:51:38.329421 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 04:51:38.329428 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 04:51:38.329434 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 04:51:38.329441 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 04:51:38.329447 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-01 04:51:38.329454 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-01 04:51:38.329460 | orchestrator | changed: [testbed-node-4] => 
(item=/var/run/ceph) 2025-06-01 04:51:38.329476 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-01 04:51:38.329483 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-01 04:51:38.329489 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 04:51:38.329496 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-01 04:51:38.329502 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-01 04:51:38.329526 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-01 04:51:38.329534 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-01 04:51:38.329541 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-01 04:51:38.329547 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-01 04:51:38.329554 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-01 04:51:38.329560 | orchestrator | 2025-06-01 04:51:38.329567 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-01 04:51:38.329573 | orchestrator | Sunday 01 June 2025 04:43:31 +0000 (0:00:06.247) 0:02:34.447 *********** 2025-06-01 04:51:38.329580 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.329586 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.329593 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.329600 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.329607 | orchestrator | 2025-06-01 04:51:38.329613 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-01 04:51:38.329620 | orchestrator | Sunday 01 June 2025 04:43:32 +0000 (0:00:01.014) 0:02:35.461 *********** 2025-06-01 04:51:38.329627 | orchestrator | changed: 
[testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.329633 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.329640 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.329647 | orchestrator | 2025-06-01 04:51:38.329653 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-01 04:51:38.329660 | orchestrator | Sunday 01 June 2025 04:43:33 +0000 (0:00:00.725) 0:02:36.187 *********** 2025-06-01 04:51:38.329667 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.329673 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.329680 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.329687 | orchestrator | 2025-06-01 04:51:38.329693 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-01 04:51:38.329700 | orchestrator | Sunday 01 June 2025 04:43:35 +0000 (0:00:01.508) 0:02:37.695 *********** 2025-06-01 04:51:38.329706 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.329713 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.329720 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.329726 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.329733 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.329739 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.329746 | 
orchestrator | 2025-06-01 04:51:38.329756 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-01 04:51:38.329763 | orchestrator | Sunday 01 June 2025 04:43:35 +0000 (0:00:00.572) 0:02:38.267 *********** 2025-06-01 04:51:38.329769 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.329776 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.329791 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.329798 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.329804 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.329811 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.329818 | orchestrator | 2025-06-01 04:51:38.329824 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-01 04:51:38.329831 | orchestrator | Sunday 01 June 2025 04:43:36 +0000 (0:00:00.716) 0:02:38.984 *********** 2025-06-01 04:51:38.329838 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.329844 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.329851 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.329858 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.329864 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.329871 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.329877 | orchestrator | 2025-06-01 04:51:38.329884 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-01 04:51:38.329891 | orchestrator | Sunday 01 June 2025 04:43:36 +0000 (0:00:00.568) 0:02:39.552 *********** 2025-06-01 04:51:38.329897 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.329904 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.329915 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.329921 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.329928 | orchestrator | 
skipping: [testbed-node-4] 2025-06-01 04:51:38.329934 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.329941 | orchestrator | 2025-06-01 04:51:38.329948 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-01 04:51:38.329954 | orchestrator | Sunday 01 June 2025 04:43:37 +0000 (0:00:00.668) 0:02:40.221 *********** 2025-06-01 04:51:38.329961 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.329968 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.329974 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.329981 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.329987 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.329995 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330006 | orchestrator | 2025-06-01 04:51:38.330049 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-01 04:51:38.330064 | orchestrator | Sunday 01 June 2025 04:43:38 +0000 (0:00:00.606) 0:02:40.827 *********** 2025-06-01 04:51:38.330074 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330085 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330096 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330109 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.330117 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.330124 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330130 | orchestrator | 2025-06-01 04:51:38.330137 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-01 04:51:38.330144 | orchestrator | Sunday 01 June 2025 04:43:39 +0000 (0:00:00.852) 0:02:41.680 *********** 2025-06-01 04:51:38.330150 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330157 | orchestrator | skipping: [testbed-node-1] 
2025-06-01 04:51:38.330163 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330170 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.330176 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.330183 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330189 | orchestrator | 2025-06-01 04:51:38.330196 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-01 04:51:38.330203 | orchestrator | Sunday 01 June 2025 04:43:39 +0000 (0:00:00.664) 0:02:42.344 *********** 2025-06-01 04:51:38.330210 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330216 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330223 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330236 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.330242 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.330249 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330255 | orchestrator | 2025-06-01 04:51:38.330262 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-01 04:51:38.330272 | orchestrator | Sunday 01 June 2025 04:43:40 +0000 (0:00:01.021) 0:02:43.366 *********** 2025-06-01 04:51:38.330283 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330293 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330304 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330315 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.330325 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.330334 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.330343 | orchestrator | 2025-06-01 04:51:38.330352 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-01 04:51:38.330362 | orchestrator | Sunday 01 June 2025 04:43:43 +0000 (0:00:02.922) 
0:02:46.289 *********** 2025-06-01 04:51:38.330372 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330381 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330391 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330401 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.330411 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.330422 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.330433 | orchestrator | 2025-06-01 04:51:38.330443 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-01 04:51:38.330454 | orchestrator | Sunday 01 June 2025 04:43:44 +0000 (0:00:00.703) 0:02:46.992 *********** 2025-06-01 04:51:38.330464 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330474 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330485 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330495 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.330505 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.330569 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.330580 | orchestrator | 2025-06-01 04:51:38.330592 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-01 04:51:38.330602 | orchestrator | Sunday 01 June 2025 04:43:45 +0000 (0:00:00.616) 0:02:47.609 *********** 2025-06-01 04:51:38.330613 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330691 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330705 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330712 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.330719 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.330725 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330732 | orchestrator | 2025-06-01 04:51:38.330739 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-06-01 04:51:38.330746 | orchestrator | Sunday 01 June 2025 04:43:45 +0000 (0:00:00.870) 0:02:48.480 *********** 2025-06-01 04:51:38.330752 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330759 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330765 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330772 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.330779 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.330786 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.330792 | orchestrator | 2025-06-01 04:51:38.330799 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-01 04:51:38.330821 | orchestrator | Sunday 01 June 2025 04:43:46 +0000 (0:00:00.561) 0:02:49.041 *********** 2025-06-01 04:51:38.330828 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330835 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330849 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330857 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-01 04:51:38.330865 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-01 04:51:38.330872 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.330879 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-01 04:51:38.330886 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-01 04:51:38.330893 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.330900 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-01 04:51:38.330907 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-01 04:51:38.330914 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330921 | orchestrator | 2025-06-01 04:51:38.330927 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-01 04:51:38.330934 | orchestrator | Sunday 01 June 2025 04:43:47 +0000 (0:00:00.901) 0:02:49.942 *********** 2025-06-01 04:51:38.330940 | 
orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.330947 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.330954 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.330960 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.330967 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.330973 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.330980 | orchestrator | 2025-06-01 04:51:38.330987 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-01 04:51:38.330993 | orchestrator | Sunday 01 June 2025 04:43:47 +0000 (0:00:00.481) 0:02:50.424 *********** 2025-06-01 04:51:38.331000 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331007 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331013 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331020 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.331026 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.331033 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.331040 | orchestrator | 2025-06-01 04:51:38.331046 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-01 04:51:38.331057 | orchestrator | Sunday 01 June 2025 04:43:48 +0000 (0:00:00.539) 0:02:50.964 *********** 2025-06-01 04:51:38.331063 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331070 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331076 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331087 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.331093 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.331099 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.331105 | orchestrator | 2025-06-01 04:51:38.331111 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2025-06-01 04:51:38.331117 | orchestrator | Sunday 01 June 2025 04:43:48 +0000 (0:00:00.550) 0:02:51.514 *********** 2025-06-01 04:51:38.331123 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331130 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331136 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331142 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.331148 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.331154 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.331160 | orchestrator | 2025-06-01 04:51:38.331166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-01 04:51:38.331172 | orchestrator | Sunday 01 June 2025 04:43:49 +0000 (0:00:00.590) 0:02:52.104 *********** 2025-06-01 04:51:38.331178 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331185 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331191 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331201 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.331208 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.331214 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.331220 | orchestrator | 2025-06-01 04:51:38.331226 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-01 04:51:38.331232 | orchestrator | Sunday 01 June 2025 04:43:50 +0000 (0:00:00.476) 0:02:52.580 *********** 2025-06-01 04:51:38.331238 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331244 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331251 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331257 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.331263 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.331269 | orchestrator | ok: 
[testbed-node-5] 2025-06-01 04:51:38.331275 | orchestrator | 2025-06-01 04:51:38.331281 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-01 04:51:38.331288 | orchestrator | Sunday 01 June 2025 04:43:50 +0000 (0:00:00.871) 0:02:53.452 *********** 2025-06-01 04:51:38.331294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 04:51:38.331300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 04:51:38.331307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 04:51:38.331313 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331319 | orchestrator | 2025-06-01 04:51:38.331325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-01 04:51:38.331332 | orchestrator | Sunday 01 June 2025 04:43:51 +0000 (0:00:00.340) 0:02:53.792 *********** 2025-06-01 04:51:38.331338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 04:51:38.331344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 04:51:38.331350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 04:51:38.331356 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331363 | orchestrator | 2025-06-01 04:51:38.331369 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-01 04:51:38.331375 | orchestrator | Sunday 01 June 2025 04:43:51 +0000 (0:00:00.396) 0:02:54.189 *********** 2025-06-01 04:51:38.331381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 04:51:38.331387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 04:51:38.331394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 04:51:38.331400 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331406 | 
orchestrator | 2025-06-01 04:51:38.331412 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-01 04:51:38.331422 | orchestrator | Sunday 01 June 2025 04:43:51 +0000 (0:00:00.306) 0:02:54.496 *********** 2025-06-01 04:51:38.331428 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331434 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331441 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331447 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.331453 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.331459 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.331465 | orchestrator | 2025-06-01 04:51:38.331471 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-01 04:51:38.331478 | orchestrator | Sunday 01 June 2025 04:43:52 +0000 (0:00:00.517) 0:02:55.013 *********** 2025-06-01 04:51:38.331484 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-01 04:51:38.331490 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.331496 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-01 04:51:38.331502 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.331509 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-01 04:51:38.331529 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.331535 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-01 04:51:38.331542 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-01 04:51:38.331548 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-01 04:51:38.331554 | orchestrator | 2025-06-01 04:51:38.331560 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-01 04:51:38.331566 | orchestrator | Sunday 01 June 2025 04:43:54 +0000 (0:00:01.720) 0:02:56.733 *********** 2025-06-01 04:51:38.331572 | orchestrator | changed: 
[testbed-node-0]
2025-06-01 04:51:38.331578 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.331584 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.331590 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.331597 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.331603 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.331609 | orchestrator |
2025-06-01 04:51:38.331615 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 04:51:38.331621 | orchestrator | Sunday 01 June 2025 04:43:56 +0000 (0:00:02.112) 0:02:58.845 ***********
2025-06-01 04:51:38.331631 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.331637 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.331643 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.331649 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.331655 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.331661 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.331667 | orchestrator |
2025-06-01 04:51:38.331673 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-01 04:51:38.331680 | orchestrator | Sunday 01 June 2025 04:43:57 +0000 (0:00:00.827) 0:02:59.673 ***********
2025-06-01 04:51:38.331686 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.331692 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.331698 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.331704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.331711 | orchestrator |
2025-06-01 04:51:38.331717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-01 04:51:38.331723 | orchestrator | Sunday 01 June 2025 04:43:57 +0000 (0:00:00.858) 0:03:00.532 ***********
2025-06-01 04:51:38.331729 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.331735 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.331741 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.331747 | orchestrator |
2025-06-01 04:51:38.331754 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-01 04:51:38.331763 | orchestrator | Sunday 01 June 2025 04:43:58 +0000 (0:00:00.265) 0:03:00.797 ***********
2025-06-01 04:51:38.331770 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.331776 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.331787 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.331793 | orchestrator |
2025-06-01 04:51:38.331799 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-01 04:51:38.331806 | orchestrator | Sunday 01 June 2025 04:43:59 +0000 (0:00:01.254) 0:03:02.052 ***********
2025-06-01 04:51:38.331812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 04:51:38.331818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 04:51:38.331824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 04:51:38.331830 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.331837 | orchestrator |
2025-06-01 04:51:38.331843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-01 04:51:38.331849 | orchestrator | Sunday 01 June 2025 04:44:00 +0000 (0:00:00.538) 0:03:02.590 ***********
2025-06-01 04:51:38.331855 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.331861 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.331867 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.331873 | orchestrator |
2025-06-01 04:51:38.331880 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-01 04:51:38.331886 | orchestrator | Sunday 01 June 2025 04:44:00 +0000 (0:00:00.305) 0:03:02.896 ***********
2025-06-01 04:51:38.331892 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.331898 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.331904 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.331910 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.331917 | orchestrator |
2025-06-01 04:51:38.331923 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-01 04:51:38.331929 | orchestrator | Sunday 01 June 2025 04:44:01 +0000 (0:00:00.852) 0:03:03.749 ***********
2025-06-01 04:51:38.331935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 04:51:38.331941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 04:51:38.331948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 04:51:38.331954 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.331960 | orchestrator |
2025-06-01 04:51:38.331966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-01 04:51:38.331972 | orchestrator | Sunday 01 June 2025 04:44:01 +0000 (0:00:00.393) 0:03:04.142 ***********
2025-06-01 04:51:38.331978 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.331985 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.331991 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.331997 | orchestrator |
2025-06-01 04:51:38.332003 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-01 04:51:38.332009 | orchestrator | Sunday 01 June 2025 04:44:01 +0000 (0:00:00.354) 0:03:04.496 ***********
2025-06-01 04:51:38.332015 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332022 | orchestrator |
2025-06-01 04:51:38.332028 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-01 04:51:38.332034 | orchestrator | Sunday 01 June 2025 04:44:02 +0000 (0:00:00.221) 0:03:04.718 ***********
2025-06-01 04:51:38.332040 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332046 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.332052 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.332058 | orchestrator |
2025-06-01 04:51:38.332065 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-01 04:51:38.332071 | orchestrator | Sunday 01 June 2025 04:44:02 +0000 (0:00:00.280) 0:03:04.998 ***********
2025-06-01 04:51:38.332077 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332083 | orchestrator |
2025-06-01 04:51:38.332089 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-01 04:51:38.332096 | orchestrator | Sunday 01 June 2025 04:44:02 +0000 (0:00:00.216) 0:03:05.199 ***********
2025-06-01 04:51:38.332106 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332112 | orchestrator |
2025-06-01 04:51:38.332118 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-01 04:51:38.332124 | orchestrator | Sunday 01 June 2025 04:44:02 +0000 (0:00:00.216) 0:03:05.416 ***********
2025-06-01 04:51:38.332130 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332137 | orchestrator |
2025-06-01 04:51:38.332143 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-01 04:51:38.332149 | orchestrator | Sunday 01 June 2025 04:44:03 +0000 (0:00:00.404) 0:03:05.820 ***********
2025-06-01 04:51:38.332158 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332164 | orchestrator |
2025-06-01 04:51:38.332170 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-01 04:51:38.332177 | orchestrator | Sunday 01 June 2025 04:44:03 +0000 (0:00:00.219) 0:03:06.040 ***********
2025-06-01 04:51:38.332183 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332189 | orchestrator |
2025-06-01 04:51:38.332195 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-01 04:51:38.332201 | orchestrator | Sunday 01 June 2025 04:44:03 +0000 (0:00:00.207) 0:03:06.247 ***********
2025-06-01 04:51:38.332207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 04:51:38.332214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 04:51:38.332220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 04:51:38.332226 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332232 | orchestrator |
2025-06-01 04:51:38.332238 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-01 04:51:38.332245 | orchestrator | Sunday 01 June 2025 04:44:04 +0000 (0:00:00.409) 0:03:06.657 ***********
2025-06-01 04:51:38.332251 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332257 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.332263 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.332269 | orchestrator |
2025-06-01 04:51:38.332278 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-01 04:51:38.332285 | orchestrator | Sunday 01 June 2025 04:44:04 +0000 (0:00:00.302) 0:03:06.959 ***********
2025-06-01 04:51:38.332291 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332297 | orchestrator |
2025-06-01 04:51:38.332303 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-01 04:51:38.332310 | orchestrator | Sunday 01 June 2025 04:44:04 +0000 (0:00:00.196) 0:03:07.156 ***********
2025-06-01 04:51:38.332316 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332322 | orchestrator |
2025-06-01 04:51:38.332328 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-01 04:51:38.332334 | orchestrator | Sunday 01 June 2025 04:44:04 +0000 (0:00:00.242) 0:03:07.399 ***********
2025-06-01 04:51:38.332340 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.332346 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.332353 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.332359 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.332365 | orchestrator |
2025-06-01 04:51:38.332371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-01 04:51:38.332377 | orchestrator | Sunday 01 June 2025 04:44:05 +0000 (0:00:01.102) 0:03:08.502 ***********
2025-06-01 04:51:38.332384 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.332390 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.332396 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.332402 | orchestrator |
2025-06-01 04:51:38.332408 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-01 04:51:38.332415 | orchestrator | Sunday 01 June 2025 04:44:06 +0000 (0:00:00.336) 0:03:08.839 ***********
2025-06-01 04:51:38.332421 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.332427 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.332437 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.332444 | orchestrator |
2025-06-01 04:51:38.332450 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-01 04:51:38.332456 | orchestrator | Sunday 01 June 2025 04:44:07 +0000 (0:00:01.123) 0:03:09.963 ***********
2025-06-01 04:51:38.332462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 04:51:38.332468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 04:51:38.332474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 04:51:38.332481 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332487 | orchestrator |
2025-06-01 04:51:38.332493 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-01 04:51:38.332499 | orchestrator | Sunday 01 June 2025 04:44:08 +0000 (0:00:01.015) 0:03:10.978 ***********
2025-06-01 04:51:38.332505 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.332527 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.332533 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.332539 | orchestrator |
2025-06-01 04:51:38.332546 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-01 04:51:38.332552 | orchestrator | Sunday 01 June 2025 04:44:08 +0000 (0:00:00.316) 0:03:11.295 ***********
2025-06-01 04:51:38.332558 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.332564 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.332570 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.332577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.332583 | orchestrator |
2025-06-01 04:51:38.332589 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-01 04:51:38.332595 | orchestrator | Sunday 01 June 2025 04:44:09 +0000 (0:00:01.014) 0:03:12.309 ***********
2025-06-01 04:51:38.332601 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.332607 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.332614 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.332620 | orchestrator |
2025-06-01 04:51:38.332626 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-01 04:51:38.332632 | orchestrator | Sunday 01 June 2025 04:44:10 +0000 (0:00:00.372) 0:03:12.682 ***********
2025-06-01 04:51:38.332638 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.332644 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.332650 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.332657 | orchestrator |
2025-06-01 04:51:38.332663 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-01 04:51:38.332669 | orchestrator | Sunday 01 June 2025 04:44:11 +0000 (0:00:01.521) 0:03:14.204 ***********
2025-06-01 04:51:38.332675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 04:51:38.332684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 04:51:38.332691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 04:51:38.332697 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332703 | orchestrator |
2025-06-01 04:51:38.332709 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-01 04:51:38.332715 | orchestrator | Sunday 01 June 2025 04:44:12 +0000 (0:00:00.880) 0:03:15.084 ***********
2025-06-01 04:51:38.332722 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.332728 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.332734 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.332740 | orchestrator |
2025-06-01 04:51:38.332746 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-06-01 04:51:38.332752 | orchestrator | Sunday 01 June 2025 04:44:12 +0000 (0:00:00.349) 0:03:15.433 ***********
2025-06-01 04:51:38.332758 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.332764 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.332771 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.332783 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332789 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.332795 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.332801 | orchestrator |
2025-06-01 04:51:38.332808 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-01 04:51:38.332814 | orchestrator | Sunday 01 June 2025 04:44:13 +0000 (0:00:00.819) 0:03:16.252 ***********
2025-06-01 04:51:38.332824 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.332830 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.332836 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.332842 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.332849 | orchestrator |
2025-06-01 04:51:38.332855 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-01 04:51:38.332861 | orchestrator | Sunday 01 June 2025 04:44:14 +0000 (0:00:01.036) 0:03:17.289 ***********
2025-06-01 04:51:38.332867 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.332873 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.332879 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.332885 | orchestrator |
2025-06-01 04:51:38.332892 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-01 04:51:38.332898 | orchestrator | Sunday 01 June 2025 04:44:15 +0000 (0:00:00.373) 0:03:17.662 ***********
2025-06-01 04:51:38.332904 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.332910 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.332916 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.332923 | orchestrator |
2025-06-01 04:51:38.332929 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-01 04:51:38.332935 | orchestrator | Sunday 01 June 2025 04:44:16 +0000 (0:00:01.245) 0:03:18.907 ***********
2025-06-01 04:51:38.332941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 04:51:38.332947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 04:51:38.332953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 04:51:38.332960 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.332966 | orchestrator |
2025-06-01 04:51:38.332972 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-01 04:51:38.332978 | orchestrator | Sunday 01 June 2025 04:44:17 +0000 (0:00:00.685) 0:03:19.593 ***********
2025-06-01 04:51:38.332984 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.332990 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.332997 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333003 | orchestrator |
2025-06-01 04:51:38.333009 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-01 04:51:38.333015 | orchestrator |
2025-06-01 04:51:38.333021 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 04:51:38.333027 | orchestrator | Sunday 01 June 2025 04:44:17 +0000 (0:00:00.615) 0:03:20.208 ***********
2025-06-01 04:51:38.333034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.333040 | orchestrator |
2025-06-01 04:51:38.333046 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 04:51:38.333052 | orchestrator | Sunday 01 June 2025 04:44:18 +0000 (0:00:00.394) 0:03:20.603 ***********
2025-06-01 04:51:38.333058 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.333065 | orchestrator |
2025-06-01 04:51:38.333071 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 04:51:38.333077 | orchestrator | Sunday 01 June 2025 04:44:18 +0000 (0:00:00.480) 0:03:21.083 ***********
2025-06-01 04:51:38.333083 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333089 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333095 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333106 | orchestrator |
2025-06-01 04:51:38.333112 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 04:51:38.333118 | orchestrator | Sunday 01 June 2025 04:44:19 +0000 (0:00:00.616) 0:03:21.700 ***********
2025-06-01 04:51:38.333124 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333130 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333137 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333143 | orchestrator |
2025-06-01 04:51:38.333149 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 04:51:38.333155 | orchestrator | Sunday 01 June 2025 04:44:19 +0000 (0:00:00.278) 0:03:21.979 ***********
2025-06-01 04:51:38.333161 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333167 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333173 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333179 | orchestrator |
2025-06-01 04:51:38.333186 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 04:51:38.333192 | orchestrator | Sunday 01 June 2025 04:44:19 +0000 (0:00:00.256) 0:03:22.235 ***********
2025-06-01 04:51:38.333198 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333204 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333213 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333219 | orchestrator |
2025-06-01 04:51:38.333225 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 04:51:38.333232 | orchestrator | Sunday 01 June 2025 04:44:20 +0000 (0:00:00.462) 0:03:22.698 ***********
2025-06-01 04:51:38.333238 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333244 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333250 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333256 | orchestrator |
2025-06-01 04:51:38.333262 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 04:51:38.333269 | orchestrator | Sunday 01 June 2025 04:44:20 +0000 (0:00:00.790) 0:03:23.488 ***********
2025-06-01 04:51:38.333275 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333281 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333287 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333293 | orchestrator |
2025-06-01 04:51:38.333299 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 04:51:38.333305 | orchestrator | Sunday 01 June 2025 04:44:21 +0000 (0:00:00.292) 0:03:23.780 ***********
2025-06-01 04:51:38.333312 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333318 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333324 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333330 | orchestrator |
2025-06-01 04:51:38.333336 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 04:51:38.333346 | orchestrator | Sunday 01 June 2025 04:44:21 +0000 (0:00:00.318) 0:03:24.099 ***********
2025-06-01 04:51:38.333352 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333358 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333365 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333371 | orchestrator |
2025-06-01 04:51:38.333377 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 04:51:38.333383 | orchestrator | Sunday 01 June 2025 04:44:22 +0000 (0:00:01.200) 0:03:25.299 ***********
2025-06-01 04:51:38.333389 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333395 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333401 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333407 | orchestrator |
2025-06-01 04:51:38.333414 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 04:51:38.333420 | orchestrator | Sunday 01 June 2025 04:44:23 +0000 (0:00:00.793) 0:03:26.092 ***********
2025-06-01 04:51:38.333426 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333432 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333438 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333445 | orchestrator |
2025-06-01 04:51:38.333451 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 04:51:38.333536 | orchestrator | Sunday 01 June 2025 04:44:23 +0000 (0:00:00.289) 0:03:26.382 ***********
2025-06-01 04:51:38.333542 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333549 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333555 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333561 | orchestrator |
2025-06-01 04:51:38.333568 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 04:51:38.333574 | orchestrator | Sunday 01 June 2025 04:44:24 +0000 (0:00:00.314) 0:03:26.697 ***********
2025-06-01 04:51:38.333580 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333586 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333593 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333599 | orchestrator |
2025-06-01 04:51:38.333605 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 04:51:38.333612 | orchestrator | Sunday 01 June 2025 04:44:24 +0000 (0:00:00.549) 0:03:27.246 ***********
2025-06-01 04:51:38.333618 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333624 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333630 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333636 | orchestrator |
2025-06-01 04:51:38.333643 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 04:51:38.333649 | orchestrator | Sunday 01 June 2025 04:44:24 +0000 (0:00:00.286) 0:03:27.532 ***********
2025-06-01 04:51:38.333655 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333662 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333668 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333674 | orchestrator |
2025-06-01 04:51:38.333680 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 04:51:38.333686 | orchestrator | Sunday 01 June 2025 04:44:25 +0000 (0:00:00.390) 0:03:27.922 ***********
2025-06-01 04:51:38.333693 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333699 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333705 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333711 | orchestrator |
2025-06-01 04:51:38.333717 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 04:51:38.333724 | orchestrator | Sunday 01 June 2025 04:44:25 +0000 (0:00:00.329) 0:03:28.252 ***********
2025-06-01 04:51:38.333730 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333736 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.333742 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.333748 | orchestrator |
2025-06-01 04:51:38.333755 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 04:51:38.333761 | orchestrator | Sunday 01 June 2025 04:44:26 +0000 (0:00:00.580) 0:03:28.833 ***********
2025-06-01 04:51:38.333767 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333773 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333779 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333785 | orchestrator |
2025-06-01 04:51:38.333792 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 04:51:38.333798 | orchestrator | Sunday 01 June 2025 04:44:26 +0000 (0:00:00.325) 0:03:29.158 ***********
2025-06-01 04:51:38.333804 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333810 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333816 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333822 | orchestrator |
2025-06-01 04:51:38.333828 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 04:51:38.333835 | orchestrator | Sunday 01 June 2025 04:44:26 +0000 (0:00:00.359) 0:03:29.518 ***********
2025-06-01 04:51:38.333841 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333847 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333853 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333859 | orchestrator |
2025-06-01 04:51:38.333869 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-06-01 04:51:38.333876 | orchestrator | Sunday 01 June 2025 04:44:27 +0000 (0:00:00.909) 0:03:30.427 ***********
2025-06-01 04:51:38.333886 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.333892 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.333898 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.333904 | orchestrator |
2025-06-01 04:51:38.333910 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-01 04:51:38.333916 | orchestrator | Sunday 01 June 2025 04:44:28 +0000 (0:00:00.322) 0:03:30.750 ***********
2025-06-01 04:51:38.333923 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.333929 | orchestrator |
2025-06-01 04:51:38.333935 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-01 04:51:38.333941 | orchestrator | Sunday 01 June 2025 04:44:28 +0000 (0:00:00.502) 0:03:31.252 ***********
2025-06-01 04:51:38.333947 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.333954 | orchestrator |
2025-06-01 04:51:38.333960 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-01 04:51:38.333966 | orchestrator | Sunday 01 June 2025 04:44:28 +0000 (0:00:00.118) 0:03:31.371 ***********
2025-06-01 04:51:38.333972 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 04:51:38.333978 | orchestrator |
2025-06-01 04:51:38.333988 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-01 04:51:38.333995 | orchestrator | Sunday 01 June 2025 04:44:30 +0000 (0:00:01.243) 0:03:32.614 ***********
2025-06-01 04:51:38.334001 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.334007 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.334013 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.334107 | orchestrator |
2025-06-01 04:51:38.334114 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-01 04:51:38.334120 | orchestrator | Sunday 01 June 2025 04:44:30 +0000 (0:00:00.332) 0:03:32.947 ***********
2025-06-01 04:51:38.334126 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.334132 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.334138 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.334144 | orchestrator |
2025-06-01 04:51:38.334150 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-06-01 04:51:38.334157 | orchestrator | Sunday 01 June 2025 04:44:30 +0000 (0:00:00.350) 0:03:33.297 ***********
2025-06-01 04:51:38.334163 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334169 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334175 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334181 | orchestrator |
2025-06-01 04:51:38.334187 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-06-01 04:51:38.334193 | orchestrator | Sunday 01 June 2025 04:44:32 +0000 (0:00:01.361) 0:03:34.658 ***********
2025-06-01 04:51:38.334199 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334205 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334211 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334217 | orchestrator |
2025-06-01 04:51:38.334224 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-06-01 04:51:38.334230 | orchestrator | Sunday 01 June 2025 04:44:33 +0000 (0:00:00.919) 0:03:35.578 ***********
2025-06-01 04:51:38.334236 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334242 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334248 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334254 | orchestrator |
2025-06-01 04:51:38.334260 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-06-01 04:51:38.334266 | orchestrator | Sunday 01 June 2025 04:44:33 +0000 (0:00:00.656) 0:03:36.234 ***********
2025-06-01 04:51:38.334273 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.334279 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.334285 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.334291 | orchestrator |
2025-06-01 04:51:38.334297 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-06-01 04:51:38.334308 | orchestrator | Sunday 01 June 2025 04:44:34 +0000 (0:00:00.677) 0:03:36.912 ***********
2025-06-01 04:51:38.334314 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334320 | orchestrator |
2025-06-01 04:51:38.334326 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-06-01 04:51:38.334332 | orchestrator | Sunday 01 June 2025 04:44:35 +0000 (0:00:01.247) 0:03:38.159 ***********
2025-06-01 04:51:38.334339 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.334345 | orchestrator |
2025-06-01 04:51:38.334351 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-06-01 04:51:38.334357 | orchestrator | Sunday 01 June 2025 04:44:36 +0000 (0:00:00.676) 0:03:38.835 ***********
2025-06-01 04:51:38.334363 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-01 04:51:38.334369 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.334375 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.334382 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 04:51:38.334388 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-06-01 04:51:38.334394 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 04:51:38.334400 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 04:51:38.334406 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-06-01 04:51:38.334412 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 04:51:38.334418 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-06-01 04:51:38.334424 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-06-01 04:51:38.334430 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-06-01 04:51:38.334437 | orchestrator |
2025-06-01 04:51:38.334443 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-06-01 04:51:38.334449 | orchestrator | Sunday 01 June 2025 04:44:39 +0000 (0:00:03.389) 0:03:42.225 ***********
2025-06-01 04:51:38.334455 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334461 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334471 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334477 | orchestrator |
2025-06-01 04:51:38.334483 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-06-01 04:51:38.334489 | orchestrator | Sunday 01 June 2025 04:44:41 +0000 (0:00:01.411) 0:03:43.636 ***********
2025-06-01 04:51:38.334495 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.334502 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.334508 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.334550 | orchestrator |
2025-06-01 04:51:38.334557 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-06-01 04:51:38.334563 | orchestrator | Sunday 01 June 2025 04:44:41 +0000 (0:00:00.327) 0:03:43.964 ***********
2025-06-01 04:51:38.334569 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.334575 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.334581 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.334587 | orchestrator |
2025-06-01 04:51:38.334594 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-06-01 04:51:38.334600 | orchestrator | Sunday 01 June 2025 04:44:41 +0000 (0:00:00.409) 0:03:44.374 ***********
2025-06-01 04:51:38.334606 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334612 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334618 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334625 | orchestrator |
2025-06-01 04:51:38.334631 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-06-01 04:51:38.334662 | orchestrator | Sunday 01 June 2025 04:44:43 +0000 (0:00:01.732) 0:03:46.106 ***********
2025-06-01 04:51:38.334670 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334676 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334682 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334688 | orchestrator |
2025-06-01 04:51:38.334699 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-06-01 04:51:38.334705 | orchestrator | Sunday 01 June 2025 04:44:44 +0000 (0:00:01.458) 0:03:47.565 ***********
2025-06-01 04:51:38.334711 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.334717 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.334724 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.334730 | orchestrator |
2025-06-01 04:51:38.334736 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-06-01 04:51:38.334742 | orchestrator | Sunday 01 June 2025 04:44:45 +0000 (0:00:00.614) 0:03:47.968 ***********
2025-06-01 04:51:38.334748 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.334755 | orchestrator |
2025-06-01 04:51:38.334761 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-06-01 04:51:38.334767 | orchestrator | Sunday 01 June 2025 04:44:46 +0000 (0:00:00.614) 0:03:48.583 ***********
2025-06-01 04:51:38.334773 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.334779 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.334785 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.334792 | orchestrator |
2025-06-01 04:51:38.334798 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-06-01 04:51:38.334804 | orchestrator | Sunday 01 June 2025 04:44:46 +0000 (0:00:00.673) 0:03:49.256 ***********
2025-06-01 04:51:38.334810 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:51:38.334816 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:51:38.334822 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:51:38.334828 | orchestrator |
2025-06-01 04:51:38.334834 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-06-01 04:51:38.334841 | orchestrator | Sunday 01 June 2025 04:44:47 +0000 (0:00:00.325) 0:03:49.582 ***********
2025-06-01 04:51:38.334847 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:51:38.334853 | orchestrator |
2025-06-01 04:51:38.334859 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-06-01 04:51:38.334865 | orchestrator | Sunday 01 June 2025 04:44:47 +0000 (0:00:00.519) 0:03:50.102 ***********
2025-06-01 04:51:38.334871 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.334877 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.334884 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.334890 | orchestrator |
2025-06-01 04:51:38.334896 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-06-01 04:51:38.334902 | orchestrator | Sunday 01 June 2025 04:44:49 +0000 (0:00:02.037)
0:03:52.139 *********** 2025-06-01 04:51:38.334908 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.334914 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.334920 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.334926 | orchestrator | 2025-06-01 04:51:38.334932 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-01 04:51:38.334939 | orchestrator | Sunday 01 June 2025 04:44:50 +0000 (0:00:01.137) 0:03:53.277 *********** 2025-06-01 04:51:38.334945 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.334951 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.334957 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.334963 | orchestrator | 2025-06-01 04:51:38.334969 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-01 04:51:38.334975 | orchestrator | Sunday 01 June 2025 04:44:52 +0000 (0:00:01.879) 0:03:55.157 *********** 2025-06-01 04:51:38.334981 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.334987 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.334993 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.334999 | orchestrator | 2025-06-01 04:51:38.335005 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-01 04:51:38.335011 | orchestrator | Sunday 01 June 2025 04:44:54 +0000 (0:00:01.968) 0:03:57.125 *********** 2025-06-01 04:51:38.335022 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.335028 | orchestrator | 2025-06-01 04:51:38.335034 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-06-01 04:51:38.335040 | orchestrator | Sunday 01 June 2025 04:44:55 +0000 (0:00:00.620) 0:03:57.745 *********** 2025-06-01 04:51:38.335050 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-01 04:51:38.335056 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335062 | orchestrator | 2025-06-01 04:51:38.335068 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-01 04:51:38.335074 | orchestrator | Sunday 01 June 2025 04:45:17 +0000 (0:00:21.860) 0:04:19.605 *********** 2025-06-01 04:51:38.335081 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335087 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335093 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335098 | orchestrator | 2025-06-01 04:51:38.335103 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-01 04:51:38.335109 | orchestrator | Sunday 01 June 2025 04:45:26 +0000 (0:00:09.910) 0:04:29.516 *********** 2025-06-01 04:51:38.335114 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335119 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335125 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335130 | orchestrator | 2025-06-01 04:51:38.335135 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-01 04:51:38.335141 | orchestrator | Sunday 01 June 2025 04:45:27 +0000 (0:00:00.523) 0:04:30.040 *********** 2025-06-01 04:51:38.335165 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2025-06-01 04:51:38.335174 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-01 04:51:38.335180 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-01 04:51:38.335187 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-01 04:51:38.335193 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-01 04:51:38.335199 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9c233c7c0b7d902f479568fba081be84ea84a599'}])  2025-06-01 04:51:38.335209 | orchestrator | 2025-06-01 04:51:38.335215 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 04:51:38.335220 | orchestrator | Sunday 01 June 2025 04:45:42 +0000 (0:00:14.844) 0:04:44.885 *********** 2025-06-01 04:51:38.335226 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335231 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335236 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335242 | orchestrator | 2025-06-01 04:51:38.335247 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-01 04:51:38.335252 | orchestrator | Sunday 01 June 2025 04:45:42 +0000 (0:00:00.390) 0:04:45.275 *********** 2025-06-01 04:51:38.335257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.335263 | orchestrator | 2025-06-01 04:51:38.335268 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-01 04:51:38.335274 | orchestrator | Sunday 01 June 2025 04:45:43 +0000 (0:00:00.917) 0:04:46.192 *********** 2025-06-01 04:51:38.335279 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335284 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335290 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335295 | orchestrator | 2025-06-01 04:51:38.335300 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-01 04:51:38.335309 | orchestrator | Sunday 01 June 2025 04:45:44 +0000 (0:00:00.408) 0:04:46.601 *********** 2025-06-01 04:51:38.335315 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335320 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335325 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335331 | orchestrator | 2025-06-01 04:51:38.335336 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-01 04:51:38.335341 | orchestrator | Sunday 01 June 2025 04:45:44 +0000 (0:00:00.336) 0:04:46.937 *********** 2025-06-01 04:51:38.335347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 04:51:38.335352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 04:51:38.335358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 04:51:38.335363 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335369 | orchestrator | 2025-06-01 04:51:38.335374 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-01 04:51:38.335379 | orchestrator | Sunday 01 June 2025 04:45:45 +0000 (0:00:00.874) 0:04:47.812 *********** 2025-06-01 04:51:38.335385 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335390 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335396 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335401 | orchestrator | 2025-06-01 04:51:38.335406 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-01 04:51:38.335412 | orchestrator | 2025-06-01 04:51:38.335417 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 04:51:38.335439 | orchestrator | Sunday 01 June 2025 04:45:46 +0000 (0:00:00.904) 0:04:48.717 *********** 2025-06-01 04:51:38.335445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.335450 | orchestrator | 2025-06-01 04:51:38.335456 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-01 04:51:38.335461 | orchestrator | Sunday 01 June 2025 04:45:46 +0000 (0:00:00.496) 0:04:49.213 *********** 2025-06-01 04:51:38.335467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.335472 | orchestrator | 2025-06-01 04:51:38.335477 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 04:51:38.335487 | orchestrator | Sunday 01 June 2025 04:45:47 +0000 (0:00:00.733) 0:04:49.947 *********** 2025-06-01 04:51:38.335492 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335498 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335503 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335508 | orchestrator | 2025-06-01 04:51:38.335527 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 04:51:38.335533 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:00.686) 0:04:50.634 *********** 2025-06-01 04:51:38.335538 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335544 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335549 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335554 | orchestrator | 2025-06-01 04:51:38.335559 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 04:51:38.335565 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:00.299) 0:04:50.934 *********** 2025-06-01 04:51:38.335570 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335576 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335581 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335586 | orchestrator | 2025-06-01 04:51:38.335592 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 
04:51:38.335597 | orchestrator | Sunday 01 June 2025 04:45:48 +0000 (0:00:00.532) 0:04:51.466 *********** 2025-06-01 04:51:38.335602 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335608 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335613 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335618 | orchestrator | 2025-06-01 04:51:38.335624 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 04:51:38.335629 | orchestrator | Sunday 01 June 2025 04:45:49 +0000 (0:00:00.379) 0:04:51.846 *********** 2025-06-01 04:51:38.335634 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335640 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335645 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335650 | orchestrator | 2025-06-01 04:51:38.335656 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 04:51:38.335661 | orchestrator | Sunday 01 June 2025 04:45:50 +0000 (0:00:00.731) 0:04:52.577 *********** 2025-06-01 04:51:38.335667 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335672 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335677 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335682 | orchestrator | 2025-06-01 04:51:38.335688 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 04:51:38.335693 | orchestrator | Sunday 01 June 2025 04:45:50 +0000 (0:00:00.294) 0:04:52.872 *********** 2025-06-01 04:51:38.335699 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335704 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335709 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335714 | orchestrator | 2025-06-01 04:51:38.335720 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 04:51:38.335725 | 
orchestrator | Sunday 01 June 2025 04:45:50 +0000 (0:00:00.571) 0:04:53.443 *********** 2025-06-01 04:51:38.335730 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335736 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335741 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335746 | orchestrator | 2025-06-01 04:51:38.335752 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 04:51:38.335757 | orchestrator | Sunday 01 June 2025 04:45:51 +0000 (0:00:00.703) 0:04:54.146 *********** 2025-06-01 04:51:38.335763 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335768 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335773 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335779 | orchestrator | 2025-06-01 04:51:38.335784 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 04:51:38.335789 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:00.830) 0:04:54.976 *********** 2025-06-01 04:51:38.335798 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335809 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335815 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335820 | orchestrator | 2025-06-01 04:51:38.335825 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 04:51:38.335831 | orchestrator | Sunday 01 June 2025 04:45:52 +0000 (0:00:00.287) 0:04:55.264 *********** 2025-06-01 04:51:38.335836 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.335841 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.335847 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.335852 | orchestrator | 2025-06-01 04:51:38.335857 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 04:51:38.335863 | orchestrator | Sunday 01 June 2025 04:45:53 +0000 
(0:00:00.592) 0:04:55.856 *********** 2025-06-01 04:51:38.335868 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335873 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335879 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335884 | orchestrator | 2025-06-01 04:51:38.335889 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 04:51:38.335894 | orchestrator | Sunday 01 June 2025 04:45:53 +0000 (0:00:00.334) 0:04:56.191 *********** 2025-06-01 04:51:38.335900 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335905 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335910 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335916 | orchestrator | 2025-06-01 04:51:38.335938 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 04:51:38.335945 | orchestrator | Sunday 01 June 2025 04:45:53 +0000 (0:00:00.277) 0:04:56.468 *********** 2025-06-01 04:51:38.335950 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335955 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335961 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335966 | orchestrator | 2025-06-01 04:51:38.335972 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-01 04:51:38.335977 | orchestrator | Sunday 01 June 2025 04:45:54 +0000 (0:00:00.293) 0:04:56.762 *********** 2025-06-01 04:51:38.335982 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.335988 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.335993 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.335999 | orchestrator | 2025-06-01 04:51:38.336004 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 04:51:38.336009 | orchestrator | Sunday 01 June 2025 04:45:54 +0000 
(0:00:00.657) 0:04:57.419 *********** 2025-06-01 04:51:38.336015 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336020 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336025 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.336031 | orchestrator | 2025-06-01 04:51:38.336036 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 04:51:38.336042 | orchestrator | Sunday 01 June 2025 04:45:55 +0000 (0:00:00.301) 0:04:57.721 *********** 2025-06-01 04:51:38.336047 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.336052 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.336058 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336063 | orchestrator | 2025-06-01 04:51:38.336069 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 04:51:38.336074 | orchestrator | Sunday 01 June 2025 04:45:55 +0000 (0:00:00.340) 0:04:58.061 *********** 2025-06-01 04:51:38.336079 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.336085 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.336090 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336096 | orchestrator | 2025-06-01 04:51:38.336101 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 04:51:38.336106 | orchestrator | Sunday 01 June 2025 04:45:55 +0000 (0:00:00.330) 0:04:58.391 *********** 2025-06-01 04:51:38.336112 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.336117 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.336126 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336132 | orchestrator | 2025-06-01 04:51:38.336137 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-01 04:51:38.336142 | orchestrator | Sunday 01 June 2025 04:45:56 +0000 (0:00:00.818) 0:04:59.210 *********** 2025-06-01 
04:51:38.336148 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:51:38.336153 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:51:38.336159 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:51:38.336164 | orchestrator | 2025-06-01 04:51:38.336169 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-01 04:51:38.336175 | orchestrator | Sunday 01 June 2025 04:45:57 +0000 (0:00:00.616) 0:04:59.826 *********** 2025-06-01 04:51:38.336180 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.336186 | orchestrator | 2025-06-01 04:51:38.336191 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-01 04:51:38.336196 | orchestrator | Sunday 01 June 2025 04:45:57 +0000 (0:00:00.521) 0:05:00.347 *********** 2025-06-01 04:51:38.336202 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.336207 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.336212 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.336218 | orchestrator | 2025-06-01 04:51:38.336223 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-01 04:51:38.336229 | orchestrator | Sunday 01 June 2025 04:45:58 +0000 (0:00:00.719) 0:05:01.067 *********** 2025-06-01 04:51:38.336234 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336239 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336245 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.336250 | orchestrator | 2025-06-01 04:51:38.336256 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-01 04:51:38.336261 | orchestrator | Sunday 01 June 2025 04:45:58 +0000 
(0:00:00.271) 0:05:01.339 *********** 2025-06-01 04:51:38.336266 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 04:51:38.336272 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 04:51:38.336277 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 04:51:38.336286 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-01 04:51:38.336291 | orchestrator | 2025-06-01 04:51:38.336297 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-01 04:51:38.336302 | orchestrator | Sunday 01 June 2025 04:46:08 +0000 (0:00:09.675) 0:05:11.014 *********** 2025-06-01 04:51:38.336307 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.336313 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.336318 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336324 | orchestrator | 2025-06-01 04:51:38.336329 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-01 04:51:38.336334 | orchestrator | Sunday 01 June 2025 04:46:08 +0000 (0:00:00.328) 0:05:11.342 *********** 2025-06-01 04:51:38.336340 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-01 04:51:38.336345 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-01 04:51:38.336350 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-01 04:51:38.336356 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-01 04:51:38.336361 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:51:38.336367 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:51:38.336372 | orchestrator | 2025-06-01 04:51:38.336377 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-01 04:51:38.336399 | orchestrator | Sunday 01 June 2025 04:46:11 +0000 (0:00:02.862) 
0:05:14.205 *********** 2025-06-01 04:51:38.336405 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-01 04:51:38.336417 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-01 04:51:38.336422 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-01 04:51:38.336428 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 04:51:38.336433 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-01 04:51:38.336438 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-01 04:51:38.336444 | orchestrator | 2025-06-01 04:51:38.336449 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-01 04:51:38.336454 | orchestrator | Sunday 01 June 2025 04:46:12 +0000 (0:00:01.189) 0:05:15.395 *********** 2025-06-01 04:51:38.336460 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.336465 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.336471 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336476 | orchestrator | 2025-06-01 04:51:38.336481 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-01 04:51:38.336487 | orchestrator | Sunday 01 June 2025 04:46:13 +0000 (0:00:00.709) 0:05:16.104 *********** 2025-06-01 04:51:38.336492 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336497 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336503 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.336508 | orchestrator | 2025-06-01 04:51:38.336541 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-01 04:51:38.336547 | orchestrator | Sunday 01 June 2025 04:46:13 +0000 (0:00:00.333) 0:05:16.438 *********** 2025-06-01 04:51:38.336553 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336558 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336563 | orchestrator | 
skipping: [testbed-node-2] 2025-06-01 04:51:38.336569 | orchestrator | 2025-06-01 04:51:38.336574 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-01 04:51:38.336580 | orchestrator | Sunday 01 June 2025 04:46:14 +0000 (0:00:00.314) 0:05:16.752 *********** 2025-06-01 04:51:38.336585 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.336591 | orchestrator | 2025-06-01 04:51:38.336596 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-01 04:51:38.336602 | orchestrator | Sunday 01 June 2025 04:46:14 +0000 (0:00:00.768) 0:05:17.521 *********** 2025-06-01 04:51:38.336606 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336611 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336616 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.336621 | orchestrator | 2025-06-01 04:51:38.336626 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-01 04:51:38.336630 | orchestrator | Sunday 01 June 2025 04:46:15 +0000 (0:00:00.307) 0:05:17.828 *********** 2025-06-01 04:51:38.336635 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336640 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336645 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.336649 | orchestrator | 2025-06-01 04:51:38.336654 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-01 04:51:38.336659 | orchestrator | Sunday 01 June 2025 04:46:15 +0000 (0:00:00.302) 0:05:18.131 *********** 2025-06-01 04:51:38.336664 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.336669 | orchestrator | 2025-06-01 04:51:38.336673 | orchestrator | TASK [ceph-mgr : 
Generate systemd unit file] *********************************** 2025-06-01 04:51:38.336678 | orchestrator | Sunday 01 June 2025 04:46:16 +0000 (0:00:00.808) 0:05:18.940 *********** 2025-06-01 04:51:38.336683 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.336688 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.336693 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.336697 | orchestrator | 2025-06-01 04:51:38.336702 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-01 04:51:38.336707 | orchestrator | Sunday 01 June 2025 04:46:17 +0000 (0:00:01.184) 0:05:20.124 *********** 2025-06-01 04:51:38.336716 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.336721 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.336726 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.336731 | orchestrator | 2025-06-01 04:51:38.336735 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-01 04:51:38.336741 | orchestrator | Sunday 01 June 2025 04:46:18 +0000 (0:00:01.153) 0:05:21.277 *********** 2025-06-01 04:51:38.336745 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.336750 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.336755 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.336760 | orchestrator | 2025-06-01 04:51:38.336768 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-01 04:51:38.336773 | orchestrator | Sunday 01 June 2025 04:46:20 +0000 (0:00:02.181) 0:05:23.459 *********** 2025-06-01 04:51:38.336778 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.336782 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.336787 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.336792 | orchestrator | 2025-06-01 04:51:38.336797 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2025-06-01 04:51:38.336801 | orchestrator | Sunday 01 June 2025 04:46:22 +0000 (0:00:02.039) 0:05:25.498 *********** 2025-06-01 04:51:38.336806 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.336811 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.336816 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-01 04:51:38.336821 | orchestrator | 2025-06-01 04:51:38.336826 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-01 04:51:38.336830 | orchestrator | Sunday 01 June 2025 04:46:23 +0000 (0:00:00.381) 0:05:25.879 *********** 2025-06-01 04:51:38.336835 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-01 04:51:38.336840 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-01 04:51:38.336862 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-01 04:51:38.336867 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-01 04:51:38.336872 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-06-01 04:51:38.336877 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-01 04:51:38.336882 | orchestrator | 2025-06-01 04:51:38.336887 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-01 04:51:38.336892 | orchestrator | Sunday 01 June 2025 04:46:53 +0000 (0:00:29.879) 0:05:55.759 *********** 2025-06-01 04:51:38.336897 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-01 04:51:38.336901 | orchestrator | 2025-06-01 04:51:38.336906 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-01 04:51:38.336911 | orchestrator | Sunday 01 June 2025 04:46:54 +0000 (0:00:01.689) 0:05:57.448 *********** 2025-06-01 04:51:38.336916 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336920 | orchestrator | 2025-06-01 04:51:38.336925 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-01 04:51:38.336930 | orchestrator | Sunday 01 June 2025 04:46:55 +0000 (0:00:00.846) 0:05:58.295 *********** 2025-06-01 04:51:38.336935 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.336939 | orchestrator | 2025-06-01 04:51:38.336944 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-01 04:51:38.336949 | orchestrator | Sunday 01 June 2025 04:46:55 +0000 (0:00:00.159) 0:05:58.455 *********** 2025-06-01 04:51:38.336954 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-01 04:51:38.336959 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-01 04:51:38.336967 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-01 04:51:38.336972 | orchestrator | 2025-06-01 04:51:38.336977 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-06-01 04:51:38.336981 | orchestrator | Sunday 01 June 2025 04:47:02 +0000 (0:00:06.308) 0:06:04.763 *********** 2025-06-01 04:51:38.336986 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-01 04:51:38.336991 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-01 04:51:38.336996 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-01 04:51:38.337000 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-01 04:51:38.337005 | orchestrator | 2025-06-01 04:51:38.337010 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 04:51:38.337015 | orchestrator | Sunday 01 June 2025 04:47:06 +0000 (0:00:04.619) 0:06:09.383 *********** 2025-06-01 04:51:38.337020 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.337025 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.337029 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.337034 | orchestrator | 2025-06-01 04:51:38.337039 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-01 04:51:38.337044 | orchestrator | Sunday 01 June 2025 04:47:07 +0000 (0:00:01.015) 0:06:10.398 *********** 2025-06-01 04:51:38.337048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:51:38.337053 | orchestrator | 2025-06-01 04:51:38.337058 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-01 04:51:38.337063 | orchestrator | Sunday 01 June 2025 04:47:08 +0000 (0:00:00.519) 0:06:10.917 *********** 2025-06-01 04:51:38.337067 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.337072 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.337077 | orchestrator | ok: 
[testbed-node-2] 2025-06-01 04:51:38.337082 | orchestrator | 2025-06-01 04:51:38.337086 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-01 04:51:38.337091 | orchestrator | Sunday 01 June 2025 04:47:08 +0000 (0:00:00.360) 0:06:11.278 *********** 2025-06-01 04:51:38.337096 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.337101 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.337106 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.337110 | orchestrator | 2025-06-01 04:51:38.337115 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-01 04:51:38.337120 | orchestrator | Sunday 01 June 2025 04:47:10 +0000 (0:00:01.612) 0:06:12.891 *********** 2025-06-01 04:51:38.337128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 04:51:38.337133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 04:51:38.337138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 04:51:38.337142 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.337147 | orchestrator | 2025-06-01 04:51:38.337152 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-01 04:51:38.337157 | orchestrator | Sunday 01 June 2025 04:47:10 +0000 (0:00:00.606) 0:06:13.497 *********** 2025-06-01 04:51:38.337161 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.337166 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.337171 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.337176 | orchestrator | 2025-06-01 04:51:38.337181 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-01 04:51:38.337186 | orchestrator | 2025-06-01 04:51:38.337190 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 
04:51:38.337195 | orchestrator | Sunday 01 June 2025 04:47:11 +0000 (0:00:00.582) 0:06:14.080 *********** 2025-06-01 04:51:38.337200 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.337209 | orchestrator | 2025-06-01 04:51:38.337214 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 04:51:38.337233 | orchestrator | Sunday 01 June 2025 04:47:12 +0000 (0:00:00.785) 0:06:14.866 *********** 2025-06-01 04:51:38.337239 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.337244 | orchestrator | 2025-06-01 04:51:38.337248 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 04:51:38.337253 | orchestrator | Sunday 01 June 2025 04:47:12 +0000 (0:00:00.559) 0:06:15.425 *********** 2025-06-01 04:51:38.337258 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337263 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337267 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337272 | orchestrator | 2025-06-01 04:51:38.337277 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 04:51:38.337282 | orchestrator | Sunday 01 June 2025 04:47:13 +0000 (0:00:00.323) 0:06:15.748 *********** 2025-06-01 04:51:38.337287 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337292 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337296 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337301 | orchestrator | 2025-06-01 04:51:38.337306 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 04:51:38.337311 | orchestrator | Sunday 01 June 2025 04:47:14 +0000 (0:00:01.062) 0:06:16.811 *********** 
2025-06-01 04:51:38.337316 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337320 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337325 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337330 | orchestrator | 2025-06-01 04:51:38.337335 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 04:51:38.337339 | orchestrator | Sunday 01 June 2025 04:47:14 +0000 (0:00:00.693) 0:06:17.504 *********** 2025-06-01 04:51:38.337344 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337349 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337354 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337358 | orchestrator | 2025-06-01 04:51:38.337363 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 04:51:38.337368 | orchestrator | Sunday 01 June 2025 04:47:15 +0000 (0:00:00.704) 0:06:18.208 *********** 2025-06-01 04:51:38.337373 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337378 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337382 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337387 | orchestrator | 2025-06-01 04:51:38.337392 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 04:51:38.337397 | orchestrator | Sunday 01 June 2025 04:47:15 +0000 (0:00:00.345) 0:06:18.554 *********** 2025-06-01 04:51:38.337402 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337406 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337411 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337416 | orchestrator | 2025-06-01 04:51:38.337421 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 04:51:38.337426 | orchestrator | Sunday 01 June 2025 04:47:16 +0000 (0:00:00.660) 0:06:19.215 *********** 2025-06-01 04:51:38.337431 | 
orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337435 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337440 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337445 | orchestrator | 2025-06-01 04:51:38.337450 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 04:51:38.337455 | orchestrator | Sunday 01 June 2025 04:47:17 +0000 (0:00:00.373) 0:06:19.588 *********** 2025-06-01 04:51:38.337459 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337464 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337469 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337474 | orchestrator | 2025-06-01 04:51:38.337479 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 04:51:38.337487 | orchestrator | Sunday 01 June 2025 04:47:17 +0000 (0:00:00.693) 0:06:20.282 *********** 2025-06-01 04:51:38.337492 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337497 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337502 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337507 | orchestrator | 2025-06-01 04:51:38.337526 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 04:51:38.337531 | orchestrator | Sunday 01 June 2025 04:47:18 +0000 (0:00:00.685) 0:06:20.967 *********** 2025-06-01 04:51:38.337538 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337547 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337555 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337566 | orchestrator | 2025-06-01 04:51:38.337580 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 04:51:38.337586 | orchestrator | Sunday 01 June 2025 04:47:19 +0000 (0:00:00.695) 0:06:21.662 *********** 2025-06-01 04:51:38.337593 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 04:51:38.337601 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337608 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337619 | orchestrator | 2025-06-01 04:51:38.337626 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 04:51:38.337634 | orchestrator | Sunday 01 June 2025 04:47:19 +0000 (0:00:00.310) 0:06:21.972 *********** 2025-06-01 04:51:38.337641 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337648 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337655 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337664 | orchestrator | 2025-06-01 04:51:38.337672 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 04:51:38.337680 | orchestrator | Sunday 01 June 2025 04:47:19 +0000 (0:00:00.319) 0:06:22.292 *********** 2025-06-01 04:51:38.337688 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337696 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337704 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337711 | orchestrator | 2025-06-01 04:51:38.337716 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 04:51:38.337720 | orchestrator | Sunday 01 June 2025 04:47:20 +0000 (0:00:00.322) 0:06:22.615 *********** 2025-06-01 04:51:38.337725 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337730 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337734 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337739 | orchestrator | 2025-06-01 04:51:38.337744 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-01 04:51:38.337749 | orchestrator | Sunday 01 June 2025 04:47:20 +0000 (0:00:00.630) 0:06:23.245 *********** 2025-06-01 04:51:38.337757 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337762 | 
orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337767 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337772 | orchestrator | 2025-06-01 04:51:38.337776 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 04:51:38.337781 | orchestrator | Sunday 01 June 2025 04:47:21 +0000 (0:00:00.333) 0:06:23.579 *********** 2025-06-01 04:51:38.337786 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337791 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337795 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337800 | orchestrator | 2025-06-01 04:51:38.337805 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 04:51:38.337809 | orchestrator | Sunday 01 June 2025 04:47:21 +0000 (0:00:00.312) 0:06:23.891 *********** 2025-06-01 04:51:38.337814 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.337819 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337823 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337828 | orchestrator | 2025-06-01 04:51:38.337833 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 04:51:38.337838 | orchestrator | Sunday 01 June 2025 04:47:21 +0000 (0:00:00.288) 0:06:24.180 *********** 2025-06-01 04:51:38.337847 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337852 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337856 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337861 | orchestrator | 2025-06-01 04:51:38.337866 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 04:51:38.337871 | orchestrator | Sunday 01 June 2025 04:47:22 +0000 (0:00:00.708) 0:06:24.889 *********** 2025-06-01 04:51:38.337876 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337880 | orchestrator | ok: 
[testbed-node-4] 2025-06-01 04:51:38.337885 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337890 | orchestrator | 2025-06-01 04:51:38.337894 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-01 04:51:38.337899 | orchestrator | Sunday 01 June 2025 04:47:22 +0000 (0:00:00.546) 0:06:25.435 *********** 2025-06-01 04:51:38.337904 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.337909 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.337913 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.337918 | orchestrator | 2025-06-01 04:51:38.337923 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-01 04:51:38.337927 | orchestrator | Sunday 01 June 2025 04:47:23 +0000 (0:00:00.342) 0:06:25.778 *********** 2025-06-01 04:51:38.337932 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 04:51:38.337937 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:51:38.337942 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:51:38.337947 | orchestrator | 2025-06-01 04:51:38.337951 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-01 04:51:38.337956 | orchestrator | Sunday 01 June 2025 04:47:24 +0000 (0:00:01.004) 0:06:26.783 *********** 2025-06-01 04:51:38.337961 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.337966 | orchestrator | 2025-06-01 04:51:38.337970 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-01 04:51:38.337975 | orchestrator | Sunday 01 June 2025 04:47:24 +0000 (0:00:00.773) 0:06:27.557 *********** 2025-06-01 04:51:38.337980 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 04:51:38.337984 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.337989 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.337994 | orchestrator | 2025-06-01 04:51:38.337998 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-01 04:51:38.338003 | orchestrator | Sunday 01 June 2025 04:47:25 +0000 (0:00:00.319) 0:06:27.877 *********** 2025-06-01 04:51:38.338008 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338013 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.338041 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.338046 | orchestrator | 2025-06-01 04:51:38.338051 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-01 04:51:38.338056 | orchestrator | Sunday 01 June 2025 04:47:25 +0000 (0:00:00.321) 0:06:28.198 *********** 2025-06-01 04:51:38.338060 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.338065 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.338070 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.338075 | orchestrator | 2025-06-01 04:51:38.338079 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-01 04:51:38.338084 | orchestrator | Sunday 01 June 2025 04:47:26 +0000 (0:00:00.891) 0:06:29.090 *********** 2025-06-01 04:51:38.338092 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.338097 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.338102 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.338106 | orchestrator | 2025-06-01 04:51:38.338111 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-01 04:51:38.338116 | orchestrator | Sunday 01 June 2025 04:47:26 +0000 (0:00:00.319) 0:06:29.410 *********** 2025-06-01 04:51:38.338125 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-01 04:51:38.338130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-01 04:51:38.338135 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-01 04:51:38.338140 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-01 04:51:38.338144 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-01 04:51:38.338149 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-01 04:51:38.338154 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-01 04:51:38.338163 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-01 04:51:38.338169 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-01 04:51:38.338174 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-01 04:51:38.338178 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-01 04:51:38.338183 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-01 04:51:38.338188 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-01 04:51:38.338193 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-01 04:51:38.338198 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-01 04:51:38.338202 | orchestrator | 2025-06-01 04:51:38.338207 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-06-01 04:51:38.338212 | orchestrator | Sunday 01 June 2025 04:47:29 +0000 (0:00:03.074) 0:06:32.485 *********** 2025-06-01 04:51:38.338217 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338221 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.338226 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.338231 | orchestrator | 2025-06-01 04:51:38.338236 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-01 04:51:38.338241 | orchestrator | Sunday 01 June 2025 04:47:30 +0000 (0:00:00.319) 0:06:32.804 *********** 2025-06-01 04:51:38.338245 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.338250 | orchestrator | 2025-06-01 04:51:38.338255 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-01 04:51:38.338260 | orchestrator | Sunday 01 June 2025 04:47:31 +0000 (0:00:00.927) 0:06:33.732 *********** 2025-06-01 04:51:38.338265 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-01 04:51:38.338269 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-01 04:51:38.338274 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-01 04:51:38.338279 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-01 04:51:38.338284 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-01 04:51:38.338289 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-01 04:51:38.338293 | orchestrator | 2025-06-01 04:51:38.338298 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-01 04:51:38.338303 | orchestrator | Sunday 01 June 2025 04:47:32 +0000 (0:00:00.948) 0:06:34.680 *********** 2025-06-01 04:51:38.338308 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:51:38.338312 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-01 04:51:38.338317 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-01 04:51:38.338322 | orchestrator | 2025-06-01 04:51:38.338330 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-01 04:51:38.338335 | orchestrator | Sunday 01 June 2025 04:47:33 +0000 (0:00:01.874) 0:06:36.555 *********** 2025-06-01 04:51:38.338340 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-01 04:51:38.338345 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-01 04:51:38.338350 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.338354 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-01 04:51:38.338359 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-01 04:51:38.338364 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.338369 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-01 04:51:38.338373 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-01 04:51:38.338378 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.338383 | orchestrator | 2025-06-01 04:51:38.338388 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-01 04:51:38.338392 | orchestrator | Sunday 01 June 2025 04:47:35 +0000 (0:00:01.418) 0:06:37.973 *********** 2025-06-01 04:51:38.338397 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 04:51:38.338402 | orchestrator | 2025-06-01 04:51:38.338407 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-01 04:51:38.338416 | orchestrator | Sunday 01 June 2025 04:47:37 +0000 (0:00:01.996) 0:06:39.970 *********** 2025-06-01 04:51:38.338421 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.338426 | orchestrator | 2025-06-01 04:51:38.338431 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-01 04:51:38.338435 | orchestrator | Sunday 01 June 2025 04:47:37 +0000 (0:00:00.533) 0:06:40.503 *********** 2025-06-01 04:51:38.338440 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-baa7c707-8012-580f-8c9e-09def35a523c', 'data_vg': 'ceph-baa7c707-8012-580f-8c9e-09def35a523c'}) 2025-06-01 04:51:38.338446 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f', 'data_vg': 'ceph-a7ddc8d9-d495-524c-b0f4-e7d8a8d73f0f'}) 2025-06-01 04:51:38.338451 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24633ad7-3e48-5d36-bc1c-15adae99ed01', 'data_vg': 'ceph-24633ad7-3e48-5d36-bc1c-15adae99ed01'}) 2025-06-01 04:51:38.338458 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-308e0632-b76f-5a8e-af6f-04e4a02ef5a9', 'data_vg': 'ceph-308e0632-b76f-5a8e-af6f-04e4a02ef5a9'}) 2025-06-01 04:51:38.338463 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1f9d798-cc3d-57c0-9350-8228d94606be', 'data_vg': 'ceph-c1f9d798-cc3d-57c0-9350-8228d94606be'}) 2025-06-01 04:51:38.338468 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2a6257e3-2619-5e00-b9d8-6074ce245854', 'data_vg': 'ceph-2a6257e3-2619-5e00-b9d8-6074ce245854'}) 2025-06-01 04:51:38.338473 | orchestrator | 2025-06-01 04:51:38.338478 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-01 04:51:38.338483 | orchestrator | Sunday 01 June 2025 04:48:16 +0000 (0:00:38.094) 0:07:18.598 *********** 2025-06-01 04:51:38.338487 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338492 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
04:51:38.338497 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.338502 | orchestrator | 2025-06-01 04:51:38.338507 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-01 04:51:38.338541 | orchestrator | Sunday 01 June 2025 04:48:16 +0000 (0:00:00.681) 0:07:19.279 *********** 2025-06-01 04:51:38.338546 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.338551 | orchestrator | 2025-06-01 04:51:38.338556 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-01 04:51:38.338561 | orchestrator | Sunday 01 June 2025 04:48:17 +0000 (0:00:00.529) 0:07:19.808 *********** 2025-06-01 04:51:38.338569 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.338574 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.338579 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.338584 | orchestrator | 2025-06-01 04:51:38.338589 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-01 04:51:38.338594 | orchestrator | Sunday 01 June 2025 04:48:17 +0000 (0:00:00.641) 0:07:20.450 *********** 2025-06-01 04:51:38.338598 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.338603 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.338608 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.338613 | orchestrator | 2025-06-01 04:51:38.338617 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-01 04:51:38.338622 | orchestrator | Sunday 01 June 2025 04:48:20 +0000 (0:00:02.772) 0:07:23.223 *********** 2025-06-01 04:51:38.338627 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.338632 | orchestrator | 2025-06-01 04:51:38.338636 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-01 04:51:38.338641 | orchestrator | Sunday 01 June 2025 04:48:21 +0000 (0:00:00.528) 0:07:23.751 *********** 2025-06-01 04:51:38.338646 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.338651 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.338656 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.338660 | orchestrator | 2025-06-01 04:51:38.338665 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-01 04:51:38.338670 | orchestrator | Sunday 01 June 2025 04:48:22 +0000 (0:00:01.193) 0:07:24.944 *********** 2025-06-01 04:51:38.338675 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.338680 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.338684 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.338689 | orchestrator | 2025-06-01 04:51:38.338694 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-01 04:51:38.338699 | orchestrator | Sunday 01 June 2025 04:48:23 +0000 (0:00:01.474) 0:07:26.419 *********** 2025-06-01 04:51:38.338703 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.338708 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.338713 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.338717 | orchestrator | 2025-06-01 04:51:38.338722 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-01 04:51:38.338727 | orchestrator | Sunday 01 June 2025 04:48:25 +0000 (0:00:01.809) 0:07:28.229 *********** 2025-06-01 04:51:38.338732 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338737 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.338741 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.338746 | orchestrator | 2025-06-01 04:51:38.338751 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-01 04:51:38.338756 | orchestrator | Sunday 01 June 2025 04:48:25 +0000 (0:00:00.334) 0:07:28.563 *********** 2025-06-01 04:51:38.338760 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338765 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.338770 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.338775 | orchestrator | 2025-06-01 04:51:38.338779 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-01 04:51:38.338787 | orchestrator | Sunday 01 June 2025 04:48:26 +0000 (0:00:00.328) 0:07:28.891 *********** 2025-06-01 04:51:38.338792 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-01 04:51:38.338797 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-01 04:51:38.338802 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-06-01 04:51:38.338806 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-01 04:51:38.338811 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-01 04:51:38.338816 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-01 04:51:38.338821 | orchestrator | 2025-06-01 04:51:38.338825 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-01 04:51:38.338835 | orchestrator | Sunday 01 June 2025 04:48:27 +0000 (0:00:01.357) 0:07:30.249 *********** 2025-06-01 04:51:38.338840 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-01 04:51:38.338844 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-01 04:51:38.338849 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-01 04:51:38.338854 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-01 04:51:38.338859 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-01 04:51:38.338863 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-01 04:51:38.338868 | orchestrator | 2025-06-01 04:51:38.338873 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-01 04:51:38.338881 | orchestrator | Sunday 01 June 2025 04:48:29 +0000 (0:00:02.192) 0:07:32.441 *********** 2025-06-01 04:51:38.338886 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-01 04:51:38.338890 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-01 04:51:38.338895 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-01 04:51:38.338900 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-01 04:51:38.338905 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-01 04:51:38.338909 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-01 04:51:38.338914 | orchestrator | 2025-06-01 04:51:38.338919 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-01 04:51:38.338924 | orchestrator | Sunday 01 June 2025 04:48:33 +0000 (0:00:03.644) 0:07:36.086 *********** 2025-06-01 04:51:38.338929 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338934 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.338938 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-01 04:51:38.338943 | orchestrator | 2025-06-01 04:51:38.338948 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-01 04:51:38.338953 | orchestrator | Sunday 01 June 2025 04:48:36 +0000 (0:00:02.655) 0:07:38.742 *********** 2025-06-01 04:51:38.338958 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338962 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.338967 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-06-01 04:51:38.338972 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-01 04:51:38.338977 | orchestrator | 2025-06-01 04:51:38.338982 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-01 04:51:38.338986 | orchestrator | Sunday 01 June 2025 04:48:48 +0000 (0:00:12.737) 0:07:51.480 *********** 2025-06-01 04:51:38.338991 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.338996 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339001 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339006 | orchestrator | 2025-06-01 04:51:38.339011 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 04:51:38.339016 | orchestrator | Sunday 01 June 2025 04:48:49 +0000 (0:00:00.877) 0:07:52.357 *********** 2025-06-01 04:51:38.339020 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339025 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339030 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339035 | orchestrator | 2025-06-01 04:51:38.339039 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-01 04:51:38.339044 | orchestrator | Sunday 01 June 2025 04:48:50 +0000 (0:00:00.737) 0:07:53.094 *********** 2025-06-01 04:51:38.339049 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.339054 | orchestrator | 2025-06-01 04:51:38.339058 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-01 04:51:38.339064 | orchestrator | Sunday 01 June 2025 04:48:51 +0000 (0:00:00.571) 0:07:53.666 *********** 2025-06-01 04:51:38.339068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:51:38.339076 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-01 04:51:38.339081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:51:38.339085 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339090 | orchestrator | 2025-06-01 04:51:38.339094 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-01 04:51:38.339099 | orchestrator | Sunday 01 June 2025 04:48:51 +0000 (0:00:00.378) 0:07:54.044 *********** 2025-06-01 04:51:38.339103 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339108 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339112 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339117 | orchestrator | 2025-06-01 04:51:38.339122 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-01 04:51:38.339126 | orchestrator | Sunday 01 June 2025 04:48:51 +0000 (0:00:00.331) 0:07:54.376 *********** 2025-06-01 04:51:38.339131 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339135 | orchestrator | 2025-06-01 04:51:38.339140 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-01 04:51:38.339144 | orchestrator | Sunday 01 June 2025 04:48:52 +0000 (0:00:00.227) 0:07:54.603 *********** 2025-06-01 04:51:38.339149 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339153 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339158 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339162 | orchestrator | 2025-06-01 04:51:38.339167 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-01 04:51:38.339174 | orchestrator | Sunday 01 June 2025 04:48:52 +0000 (0:00:00.668) 0:07:55.272 *********** 2025-06-01 04:51:38.339179 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339183 | orchestrator | 2025-06-01 04:51:38.339188 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-01 04:51:38.339192 | orchestrator | Sunday 01 June 2025 04:48:52 +0000 (0:00:00.244) 0:07:55.517 *********** 2025-06-01 04:51:38.339197 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339201 | orchestrator | 2025-06-01 04:51:38.339206 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-01 04:51:38.339210 | orchestrator | Sunday 01 June 2025 04:48:53 +0000 (0:00:00.230) 0:07:55.748 *********** 2025-06-01 04:51:38.339215 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339219 | orchestrator | 2025-06-01 04:51:38.339224 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-01 04:51:38.339228 | orchestrator | Sunday 01 June 2025 04:48:53 +0000 (0:00:00.138) 0:07:55.886 *********** 2025-06-01 04:51:38.339233 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339237 | orchestrator | 2025-06-01 04:51:38.339242 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-01 04:51:38.339247 | orchestrator | Sunday 01 June 2025 04:48:53 +0000 (0:00:00.222) 0:07:56.108 *********** 2025-06-01 04:51:38.339251 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339256 | orchestrator | 2025-06-01 04:51:38.339262 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-01 04:51:38.339267 | orchestrator | Sunday 01 June 2025 04:48:53 +0000 (0:00:00.241) 0:07:56.350 *********** 2025-06-01 04:51:38.339272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:51:38.339276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:51:38.339281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:51:38.339285 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
04:51:38.339290 | orchestrator | 2025-06-01 04:51:38.339295 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-01 04:51:38.339299 | orchestrator | Sunday 01 June 2025 04:48:54 +0000 (0:00:00.429) 0:07:56.779 *********** 2025-06-01 04:51:38.339304 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339308 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339313 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339321 | orchestrator | 2025-06-01 04:51:38.339326 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-01 04:51:38.339330 | orchestrator | Sunday 01 June 2025 04:48:54 +0000 (0:00:00.303) 0:07:57.083 *********** 2025-06-01 04:51:38.339335 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339339 | orchestrator | 2025-06-01 04:51:38.339344 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-01 04:51:38.339348 | orchestrator | Sunday 01 June 2025 04:48:55 +0000 (0:00:00.878) 0:07:57.962 *********** 2025-06-01 04:51:38.339353 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339357 | orchestrator | 2025-06-01 04:51:38.339362 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-01 04:51:38.339366 | orchestrator | 2025-06-01 04:51:38.339371 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 04:51:38.339375 | orchestrator | Sunday 01 June 2025 04:48:56 +0000 (0:00:00.673) 0:07:58.635 *********** 2025-06-01 04:51:38.339380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.339385 | orchestrator | 2025-06-01 04:51:38.339389 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-01 04:51:38.339394 | orchestrator | Sunday 01 June 2025 04:48:57 +0000 (0:00:01.176) 0:07:59.811 *********** 2025-06-01 04:51:38.339399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.339403 | orchestrator | 2025-06-01 04:51:38.339408 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 04:51:38.339412 | orchestrator | Sunday 01 June 2025 04:48:58 +0000 (0:00:01.205) 0:08:01.017 *********** 2025-06-01 04:51:38.339417 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339422 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339426 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.339431 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.339435 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.339440 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339444 | orchestrator | 2025-06-01 04:51:38.339449 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 04:51:38.339453 | orchestrator | Sunday 01 June 2025 04:48:59 +0000 (0:00:00.973) 0:08:01.990 *********** 2025-06-01 04:51:38.339458 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339462 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339467 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339471 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339476 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339481 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339485 | orchestrator | 2025-06-01 04:51:38.339490 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 04:51:38.339494 | orchestrator | Sunday 01 
June 2025 04:49:00 +0000 (0:00:00.990) 0:08:02.980 *********** 2025-06-01 04:51:38.339499 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339503 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339508 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339522 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339526 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339531 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339535 | orchestrator | 2025-06-01 04:51:38.339540 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 04:51:38.339545 | orchestrator | Sunday 01 June 2025 04:49:01 +0000 (0:00:01.345) 0:08:04.326 *********** 2025-06-01 04:51:38.339549 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339554 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339561 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339566 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339575 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339579 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339584 | orchestrator | 2025-06-01 04:51:38.339588 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 04:51:38.339593 | orchestrator | Sunday 01 June 2025 04:49:02 +0000 (0:00:01.007) 0:08:05.333 *********** 2025-06-01 04:51:38.339597 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339602 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.339606 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339611 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.339615 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.339620 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339624 | orchestrator | 2025-06-01 04:51:38.339629 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-01 04:51:38.339633 | orchestrator | Sunday 01 June 2025 04:49:03 +0000 (0:00:00.878) 0:08:06.211 *********** 2025-06-01 04:51:38.339638 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339642 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339647 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339651 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339656 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339660 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339665 | orchestrator | 2025-06-01 04:51:38.339672 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 04:51:38.339676 | orchestrator | Sunday 01 June 2025 04:49:04 +0000 (0:00:00.644) 0:08:06.856 *********** 2025-06-01 04:51:38.339681 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339686 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339690 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339694 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339699 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339703 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339708 | orchestrator | 2025-06-01 04:51:38.339713 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 04:51:38.339717 | orchestrator | Sunday 01 June 2025 04:49:05 +0000 (0:00:00.901) 0:08:07.757 *********** 2025-06-01 04:51:38.339722 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.339726 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.339731 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.339735 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339740 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339744 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339749 | 
orchestrator | 2025-06-01 04:51:38.339753 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 04:51:38.339758 | orchestrator | Sunday 01 June 2025 04:49:06 +0000 (0:00:01.056) 0:08:08.814 *********** 2025-06-01 04:51:38.339762 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.339767 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.339771 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.339776 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339780 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339785 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339789 | orchestrator | 2025-06-01 04:51:38.339794 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 04:51:38.339798 | orchestrator | Sunday 01 June 2025 04:49:07 +0000 (0:00:01.296) 0:08:10.110 *********** 2025-06-01 04:51:38.339803 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339807 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339812 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339816 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339821 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.339825 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339830 | orchestrator | 2025-06-01 04:51:38.339834 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 04:51:38.339843 | orchestrator | Sunday 01 June 2025 04:49:08 +0000 (0:00:00.592) 0:08:10.703 *********** 2025-06-01 04:51:38.339848 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.339852 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.339857 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.339861 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.339866 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
04:51:38.339870 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.339875 | orchestrator | 2025-06-01 04:51:38.339879 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 04:51:38.339884 | orchestrator | Sunday 01 June 2025 04:49:08 +0000 (0:00:00.798) 0:08:11.502 *********** 2025-06-01 04:51:38.339889 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339893 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339898 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339902 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339907 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339911 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339916 | orchestrator | 2025-06-01 04:51:38.339920 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 04:51:38.339925 | orchestrator | Sunday 01 June 2025 04:49:09 +0000 (0:00:00.636) 0:08:12.138 *********** 2025-06-01 04:51:38.339929 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339934 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339938 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339943 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.339947 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339952 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339956 | orchestrator | 2025-06-01 04:51:38.339961 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 04:51:38.339966 | orchestrator | Sunday 01 June 2025 04:49:10 +0000 (0:00:00.844) 0:08:12.983 *********** 2025-06-01 04:51:38.339970 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.339975 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.339979 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.339984 | orchestrator | ok: 
[testbed-node-3] 2025-06-01 04:51:38.339988 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.339993 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.339997 | orchestrator | 2025-06-01 04:51:38.340002 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-01 04:51:38.340006 | orchestrator | Sunday 01 June 2025 04:49:11 +0000 (0:00:00.630) 0:08:13.614 *********** 2025-06-01 04:51:38.340011 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.340018 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.340023 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.340027 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.340032 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.340036 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.340041 | orchestrator | 2025-06-01 04:51:38.340045 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 04:51:38.340050 | orchestrator | Sunday 01 June 2025 04:49:11 +0000 (0:00:00.833) 0:08:14.448 *********** 2025-06-01 04:51:38.340054 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:51:38.340059 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:51:38.340063 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:51:38.340068 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.340072 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.340077 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.340081 | orchestrator | 2025-06-01 04:51:38.340086 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 04:51:38.340090 | orchestrator | Sunday 01 June 2025 04:49:12 +0000 (0:00:00.588) 0:08:15.036 *********** 2025-06-01 04:51:38.340095 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.340100 | orchestrator | ok: [testbed-node-1] 2025-06-01 
04:51:38.340104 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.340114 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.340118 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.340123 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.340127 | orchestrator | 2025-06-01 04:51:38.340134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 04:51:38.340139 | orchestrator | Sunday 01 June 2025 04:49:13 +0000 (0:00:00.910) 0:08:15.946 *********** 2025-06-01 04:51:38.340143 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.340148 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.340152 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.340157 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.340161 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.340166 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.340170 | orchestrator | 2025-06-01 04:51:38.340175 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 04:51:38.340179 | orchestrator | Sunday 01 June 2025 04:49:14 +0000 (0:00:00.656) 0:08:16.603 *********** 2025-06-01 04:51:38.340184 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.340188 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:51:38.340193 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:51:38.340197 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.340202 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.340206 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.340211 | orchestrator | 2025-06-01 04:51:38.340215 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-01 04:51:38.340220 | orchestrator | Sunday 01 June 2025 04:49:15 +0000 (0:00:01.305) 0:08:17.908 *********** 2025-06-01 04:51:38.340225 | orchestrator | changed: [testbed-node-0] 2025-06-01 
04:51:38.340229 | orchestrator | 2025-06-01 04:51:38.340234 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-01 04:51:38.340238 | orchestrator | Sunday 01 June 2025 04:49:19 +0000 (0:00:03.815) 0:08:21.724 *********** 2025-06-01 04:51:38.340243 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.340247 | orchestrator | 2025-06-01 04:51:38.340252 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-01 04:51:38.340257 | orchestrator | Sunday 01 June 2025 04:49:21 +0000 (0:00:01.946) 0:08:23.671 *********** 2025-06-01 04:51:38.340261 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:51:38.340266 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.340270 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.340275 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.340279 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.340284 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.340288 | orchestrator | 2025-06-01 04:51:38.340293 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-01 04:51:38.340297 | orchestrator | Sunday 01 June 2025 04:49:22 +0000 (0:00:01.781) 0:08:25.452 *********** 2025-06-01 04:51:38.340302 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.340306 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.340311 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.340315 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.340320 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.340324 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.340329 | orchestrator | 2025-06-01 04:51:38.340333 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-01 04:51:38.340338 | orchestrator | Sunday 01 June 2025 04:49:23 +0000 
(0:00:00.943) 0:08:26.395 *********** 2025-06-01 04:51:38.340342 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.340348 | orchestrator | 2025-06-01 04:51:38.340352 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-01 04:51:38.340357 | orchestrator | Sunday 01 June 2025 04:49:25 +0000 (0:00:01.298) 0:08:27.694 *********** 2025-06-01 04:51:38.340361 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.340369 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.340374 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.340378 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.340383 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.340387 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.340392 | orchestrator | 2025-06-01 04:51:38.340397 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-01 04:51:38.340401 | orchestrator | Sunday 01 June 2025 04:49:26 +0000 (0:00:01.788) 0:08:29.482 *********** 2025-06-01 04:51:38.340406 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:51:38.340410 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:51:38.340415 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.340419 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:51:38.340424 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.340428 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.340433 | orchestrator | 2025-06-01 04:51:38.340437 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-01 04:51:38.340442 | orchestrator | Sunday 01 June 2025 04:49:30 +0000 (0:00:03.293) 0:08:32.775 *********** 2025-06-01 04:51:38.340449 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.340454 | orchestrator |
2025-06-01 04:51:38.340459 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-06-01 04:51:38.340463 | orchestrator | Sunday 01 June 2025 04:49:31 +0000 (0:00:01.264) 0:08:34.040 ***********
2025-06-01 04:51:38.340468 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.340472 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.340477 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.340481 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340485 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340490 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340494 | orchestrator |
2025-06-01 04:51:38.340499 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-06-01 04:51:38.340503 | orchestrator | Sunday 01 June 2025 04:49:32 +0000 (0:00:00.867) 0:08:34.908 ***********
2025-06-01 04:51:38.340508 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:51:38.340525 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:51:38.340530 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:51:38.340535 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.340539 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.340544 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.340548 | orchestrator |
2025-06-01 04:51:38.340555 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-06-01 04:51:38.340560 | orchestrator | Sunday 01 June 2025 04:49:34 +0000 (0:00:02.390) 0:08:37.298 ***********
2025-06-01 04:51:38.340564 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:51:38.340569 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:51:38.340573 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:51:38.340578 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340582 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340587 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340591 | orchestrator |
2025-06-01 04:51:38.340596 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-06-01 04:51:38.340600 | orchestrator |
2025-06-01 04:51:38.340605 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 04:51:38.340610 | orchestrator | Sunday 01 June 2025 04:49:35 +0000 (0:00:01.121) 0:08:38.419 ***********
2025-06-01 04:51:38.340614 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.340619 | orchestrator |
2025-06-01 04:51:38.340623 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 04:51:38.340628 | orchestrator | Sunday 01 June 2025 04:49:36 +0000 (0:00:00.515) 0:08:38.934 ***********
2025-06-01 04:51:38.340636 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.340641 | orchestrator |
2025-06-01 04:51:38.340645 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 04:51:38.340650 | orchestrator | Sunday 01 June 2025 04:49:37 +0000 (0:00:00.858) 0:08:39.792 ***********
2025-06-01 04:51:38.340654 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.340659 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.340663 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.340668 | orchestrator |
2025-06-01 04:51:38.340672 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 04:51:38.340677 | orchestrator | Sunday 01 June 2025 04:49:37 +0000 (0:00:00.302) 0:08:40.095 ***********
2025-06-01 04:51:38.340681 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340686 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340690 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340695 | orchestrator |
2025-06-01 04:51:38.340699 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 04:51:38.340704 | orchestrator | Sunday 01 June 2025 04:49:38 +0000 (0:00:00.675) 0:08:40.771 ***********
2025-06-01 04:51:38.340708 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340713 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340717 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340722 | orchestrator |
2025-06-01 04:51:38.340726 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 04:51:38.340731 | orchestrator | Sunday 01 June 2025 04:49:39 +0000 (0:00:00.886) 0:08:41.657 ***********
2025-06-01 04:51:38.340736 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340740 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340745 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340749 | orchestrator |
2025-06-01 04:51:38.340754 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 04:51:38.340758 | orchestrator | Sunday 01 June 2025 04:49:39 +0000 (0:00:00.715) 0:08:42.372 ***********
2025-06-01 04:51:38.340763 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.340767 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.340772 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.340776 | orchestrator |
2025-06-01 04:51:38.340781 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 04:51:38.340786 | orchestrator | Sunday 01 June 2025 04:49:40 +0000 (0:00:00.269) 0:08:42.642 ***********
2025-06-01 04:51:38.340790 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.340795 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.340799 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.340804 | orchestrator |
2025-06-01 04:51:38.340808 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 04:51:38.340813 | orchestrator | Sunday 01 June 2025 04:49:40 +0000 (0:00:00.285) 0:08:42.927 ***********
2025-06-01 04:51:38.340817 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.340822 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.340826 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.340831 | orchestrator |
2025-06-01 04:51:38.340835 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 04:51:38.340840 | orchestrator | Sunday 01 June 2025 04:49:40 +0000 (0:00:00.480) 0:08:43.408 ***********
2025-06-01 04:51:38.340844 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340849 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340853 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340858 | orchestrator |
2025-06-01 04:51:38.340867 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 04:51:38.340871 | orchestrator | Sunday 01 June 2025 04:49:41 +0000 (0:00:00.682) 0:08:44.090 ***********
2025-06-01 04:51:38.340876 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340880 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340889 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340894 | orchestrator |
2025-06-01 04:51:38.340898 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 04:51:38.340903 | orchestrator | Sunday 01 June 2025 04:49:42 +0000 (0:00:00.728) 0:08:44.819 ***********
2025-06-01 04:51:38.340907 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.340912 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.340916 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.340921 | orchestrator |
2025-06-01 04:51:38.340926 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 04:51:38.340930 | orchestrator | Sunday 01 June 2025 04:49:42 +0000 (0:00:00.292) 0:08:45.112 ***********
2025-06-01 04:51:38.340935 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.340939 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.340944 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.340948 | orchestrator |
2025-06-01 04:51:38.340953 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 04:51:38.340957 | orchestrator | Sunday 01 June 2025 04:49:43 +0000 (0:00:00.633) 0:08:45.745 ***********
2025-06-01 04:51:38.340964 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340969 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.340973 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.340978 | orchestrator |
2025-06-01 04:51:38.340982 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 04:51:38.340987 | orchestrator | Sunday 01 June 2025 04:49:43 +0000 (0:00:00.372) 0:08:46.118 ***********
2025-06-01 04:51:38.340991 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.340996 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341000 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341005 | orchestrator |
2025-06-01 04:51:38.341009 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 04:51:38.341014 | orchestrator | Sunday 01 June 2025 04:49:43 +0000 (0:00:00.372) 0:08:46.490 ***********
2025-06-01 04:51:38.341019 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.341023 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341027 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341032 | orchestrator |
2025-06-01 04:51:38.341037 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 04:51:38.341041 | orchestrator | Sunday 01 June 2025 04:49:44 +0000 (0:00:00.325) 0:08:46.816 ***********
2025-06-01 04:51:38.341046 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.341050 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.341055 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.341059 | orchestrator |
2025-06-01 04:51:38.341064 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 04:51:38.341068 | orchestrator | Sunday 01 June 2025 04:49:44 +0000 (0:00:00.629) 0:08:47.445 ***********
2025-06-01 04:51:38.341073 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.341077 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.341082 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.341086 | orchestrator |
2025-06-01 04:51:38.341091 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 04:51:38.341095 | orchestrator | Sunday 01 June 2025 04:49:45 +0000 (0:00:00.265) 0:08:47.711 ***********
2025-06-01 04:51:38.341100 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.341104 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.341109 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.341113 | orchestrator |
2025-06-01 04:51:38.341118 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 04:51:38.341122 | orchestrator | Sunday 01 June 2025 04:49:45 +0000 (0:00:00.279) 0:08:47.991 ***********
2025-06-01 04:51:38.341127 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.341131 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341136 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341140 | orchestrator |
2025-06-01 04:51:38.341148 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 04:51:38.341153 | orchestrator | Sunday 01 June 2025 04:49:45 +0000 (0:00:00.262) 0:08:48.253 ***********
2025-06-01 04:51:38.341157 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.341162 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341167 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341171 | orchestrator |
2025-06-01 04:51:38.341176 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-06-01 04:51:38.341180 | orchestrator | Sunday 01 June 2025 04:49:46 +0000 (0:00:00.678) 0:08:48.931 ***********
2025-06-01 04:51:38.341185 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.341189 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.341194 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-06-01 04:51:38.341198 | orchestrator |
2025-06-01 04:51:38.341203 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-06-01 04:51:38.341207 | orchestrator | Sunday 01 June 2025 04:49:46 +0000 (0:00:00.342) 0:08:49.273 ***********
2025-06-01 04:51:38.341212 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 04:51:38.341216 | orchestrator |
2025-06-01 04:51:38.341221 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-06-01 04:51:38.341225 | orchestrator | Sunday 01 June 2025 04:49:48 +0000 (0:00:02.013) 0:08:51.287 ***********
2025-06-01 04:51:38.341231 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-06-01 04:51:38.341237 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.341242 | orchestrator |
2025-06-01 04:51:38.341246 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-06-01 04:51:38.341251 | orchestrator | Sunday 01 June 2025 04:49:48 +0000 (0:00:00.184) 0:08:51.471 ***********
2025-06-01 04:51:38.341260 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 04:51:38.341270 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 04:51:38.341275 | orchestrator |
2025-06-01 04:51:38.341279 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-06-01 04:51:38.341284 | orchestrator | Sunday 01 June 2025 04:49:57 +0000 (0:00:08.712) 0:09:00.183 ***********
2025-06-01 04:51:38.341288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 04:51:38.341293 | orchestrator |
2025-06-01 04:51:38.341297 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-06-01 04:51:38.341302 | orchestrator | Sunday 01 June 2025 04:50:01 +0000 (0:00:03.506) 0:09:03.690 ***********
2025-06-01 04:51:38.341309 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.341313 | orchestrator |
2025-06-01 04:51:38.341318 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-06-01 04:51:38.341322 | orchestrator | Sunday 01 June 2025 04:50:01 +0000 (0:00:00.556) 0:09:04.246 ***********
2025-06-01 04:51:38.341327 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 04:51:38.341331 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 04:51:38.341336 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 04:51:38.341340 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-06-01 04:51:38.341349 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-06-01 04:51:38.341354 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-06-01 04:51:38.341358 | orchestrator |
2025-06-01 04:51:38.341363 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-06-01 04:51:38.341367 | orchestrator | Sunday 01 June 2025 04:50:02 +0000 (0:00:01.091) 0:09:05.338 ***********
2025-06-01 04:51:38.341372 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.341376 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 04:51:38.341381 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 04:51:38.341385 | orchestrator |
2025-06-01 04:51:38.341390 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-06-01 04:51:38.341394 | orchestrator | Sunday 01 June 2025 04:50:05 +0000 (0:00:02.510) 0:09:07.848 ***********
2025-06-01 04:51:38.341399 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 04:51:38.341404 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 04:51:38.341408 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341413 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 04:51:38.341417 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 04:51:38.341422 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341426 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 04:51:38.341431 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 04:51:38.341435 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341440 | orchestrator |
2025-06-01 04:51:38.341444 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-06-01 04:51:38.341449 | orchestrator | Sunday 01 June 2025 04:50:07 +0000 (0:00:01.718) 0:09:09.567 ***********
2025-06-01 04:51:38.341454 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341458 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341463 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341467 | orchestrator |
2025-06-01 04:51:38.341472 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-06-01 04:51:38.341476 | orchestrator | Sunday 01 June 2025 04:50:10 +0000 (0:00:03.041) 0:09:12.609 ***********
2025-06-01 04:51:38.341481 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.341485 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.341490 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.341494 | orchestrator |
2025-06-01 04:51:38.341499 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-06-01 04:51:38.341504 | orchestrator | Sunday 01 June 2025 04:50:10 +0000 (0:00:00.376) 0:09:12.985 ***********
2025-06-01 04:51:38.341508 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.341537 | orchestrator |
2025-06-01 04:51:38.341541 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-06-01 04:51:38.341546 | orchestrator | Sunday 01 June 2025 04:50:11 +0000 (0:00:00.680) 0:09:13.665 ***********
2025-06-01 04:51:38.341550 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.341555 | orchestrator |
2025-06-01 04:51:38.341560 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-06-01 04:51:38.341564 | orchestrator | Sunday 01 June 2025 04:50:11 +0000 (0:00:00.612) 0:09:14.278 ***********
2025-06-01 04:51:38.341569 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341573 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341578 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341582 | orchestrator |
2025-06-01 04:51:38.341587 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-06-01 04:51:38.341591 | orchestrator | Sunday 01 June 2025 04:50:13 +0000 (0:00:01.402) 0:09:15.681 ***********
2025-06-01 04:51:38.341601 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341605 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341610 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341614 | orchestrator |
2025-06-01 04:51:38.341619 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-06-01 04:51:38.341623 | orchestrator | Sunday 01 June 2025 04:50:14 +0000 (0:00:01.822) 0:09:17.503 ***********
2025-06-01 04:51:38.341628 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341632 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341637 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341641 | orchestrator |
2025-06-01 04:51:38.341646 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-06-01 04:51:38.341650 | orchestrator | Sunday 01 June 2025 04:50:17 +0000 (0:00:02.132) 0:09:19.636 ***********
2025-06-01 04:51:38.341655 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341660 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341664 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341669 | orchestrator |
2025-06-01 04:51:38.341673 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-06-01 04:51:38.341678 | orchestrator | Sunday 01 June 2025 04:50:19 +0000 (0:00:02.087) 0:09:21.724 ***********
2025-06-01 04:51:38.341682 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.341689 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341694 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341699 | orchestrator |
2025-06-01 04:51:38.341703 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 04:51:38.341708 | orchestrator | Sunday 01 June 2025 04:50:20 +0000 (0:00:01.662) 0:09:23.387 ***********
2025-06-01 04:51:38.341712 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341717 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341721 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341726 | orchestrator |
2025-06-01 04:51:38.341730 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-01 04:51:38.341735 | orchestrator | Sunday 01 June 2025 04:50:21 +0000 (0:00:00.773) 0:09:24.160 ***********
2025-06-01 04:51:38.341739 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.341744 | orchestrator |
2025-06-01 04:51:38.341748 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-01 04:51:38.341753 | orchestrator | Sunday 01 June 2025 04:50:22 +0000 (0:00:00.910) 0:09:25.071 ***********
2025-06-01 04:51:38.341757 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.341762 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341766 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341771 | orchestrator |
2025-06-01 04:51:38.341775 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-01 04:51:38.341780 | orchestrator | Sunday 01 June 2025 04:50:22 +0000 (0:00:00.347) 0:09:25.419 ***********
2025-06-01 04:51:38.341785 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.341789 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.341794 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.341801 | orchestrator |
2025-06-01 04:51:38.341809 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-01 04:51:38.341817 | orchestrator | Sunday 01 June 2025 04:50:24 +0000 (0:00:01.261) 0:09:26.680 ***********
2025-06-01 04:51:38.341824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 04:51:38.341832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 04:51:38.341839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 04:51:38.341847 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.341853 | orchestrator |
2025-06-01 04:51:38.341857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-01 04:51:38.341862 | orchestrator | Sunday 01 June 2025 04:50:25 +0000 (0:00:00.983) 0:09:27.664 ***********
2025-06-01 04:51:38.341870 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.341875 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.341923 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.341936 | orchestrator |
2025-06-01 04:51:38.341941 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-01 04:51:38.341945 | orchestrator |
2025-06-01 04:51:38.341950 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 04:51:38.341955 | orchestrator | Sunday 01 June 2025 04:50:25 +0000 (0:00:00.857) 0:09:28.522 ***********
2025-06-01 04:51:38.341959 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.341964 | orchestrator |
2025-06-01 04:51:38.341968 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 04:51:38.341973 | orchestrator | Sunday 01 June 2025 04:50:26 +0000 (0:00:00.548) 0:09:29.070 ***********
2025-06-01 04:51:38.341977 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.341982 | orchestrator |
2025-06-01 04:51:38.341987 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 04:51:38.341991 | orchestrator | Sunday 01 June 2025 04:50:27 +0000 (0:00:00.745) 0:09:29.815 ***********
2025-06-01 04:51:38.341995 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342000 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342005 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342009 | orchestrator |
2025-06-01 04:51:38.342029 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 04:51:38.342035 | orchestrator | Sunday 01 June 2025 04:50:27 +0000 (0:00:00.311) 0:09:30.127 ***********
2025-06-01 04:51:38.342040 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342044 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342049 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342053 | orchestrator |
2025-06-01 04:51:38.342058 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 04:51:38.342065 | orchestrator | Sunday 01 June 2025 04:50:28 +0000 (0:00:00.689) 0:09:30.816 ***********
2025-06-01 04:51:38.342069 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342074 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342078 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342082 | orchestrator |
2025-06-01 04:51:38.342086 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 04:51:38.342090 | orchestrator | Sunday 01 June 2025 04:50:28 +0000 (0:00:00.716) 0:09:31.533 ***********
2025-06-01 04:51:38.342094 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342098 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342102 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342106 | orchestrator |
2025-06-01 04:51:38.342110 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 04:51:38.342114 | orchestrator | Sunday 01 June 2025 04:50:30 +0000 (0:00:01.135) 0:09:32.668 ***********
2025-06-01 04:51:38.342119 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342123 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342127 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342131 | orchestrator |
2025-06-01 04:51:38.342135 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 04:51:38.342139 | orchestrator | Sunday 01 June 2025 04:50:30 +0000 (0:00:00.318) 0:09:32.987 ***********
2025-06-01 04:51:38.342143 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342147 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342151 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342155 | orchestrator |
2025-06-01 04:51:38.342163 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 04:51:38.342168 | orchestrator | Sunday 01 June 2025 04:50:30 +0000 (0:00:00.351) 0:09:33.338 ***********
2025-06-01 04:51:38.342176 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342180 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342184 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342188 | orchestrator |
2025-06-01 04:51:38.342192 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 04:51:38.342197 | orchestrator | Sunday 01 June 2025 04:50:31 +0000 (0:00:00.288) 0:09:33.627 ***********
2025-06-01 04:51:38.342201 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342205 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342209 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342213 | orchestrator |
2025-06-01 04:51:38.342217 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 04:51:38.342221 | orchestrator | Sunday 01 June 2025 04:50:32 +0000 (0:00:01.054) 0:09:34.681 ***********
2025-06-01 04:51:38.342225 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342229 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342233 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342237 | orchestrator |
2025-06-01 04:51:38.342241 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 04:51:38.342245 | orchestrator | Sunday 01 June 2025 04:50:32 +0000 (0:00:00.742) 0:09:35.424 ***********
2025-06-01 04:51:38.342249 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342253 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342258 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342262 | orchestrator |
2025-06-01 04:51:38.342266 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 04:51:38.342270 | orchestrator | Sunday 01 June 2025 04:50:33 +0000 (0:00:00.321) 0:09:35.746 ***********
2025-06-01 04:51:38.342274 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342278 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342282 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342286 | orchestrator |
2025-06-01 04:51:38.342290 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 04:51:38.342294 | orchestrator | Sunday 01 June 2025 04:50:33 +0000 (0:00:00.317) 0:09:36.064 ***********
2025-06-01 04:51:38.342298 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342302 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342306 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342310 | orchestrator |
2025-06-01 04:51:38.342315 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 04:51:38.342319 | orchestrator | Sunday 01 June 2025 04:50:34 +0000 (0:00:00.649) 0:09:36.713 ***********
2025-06-01 04:51:38.342323 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342327 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342331 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342335 | orchestrator |
2025-06-01 04:51:38.342339 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 04:51:38.342343 | orchestrator | Sunday 01 June 2025 04:50:34 +0000 (0:00:00.342) 0:09:37.056 ***********
2025-06-01 04:51:38.342347 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342351 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342355 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342359 | orchestrator |
2025-06-01 04:51:38.342363 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 04:51:38.342367 | orchestrator | Sunday 01 June 2025 04:50:34 +0000 (0:00:00.321) 0:09:37.377 ***********
2025-06-01 04:51:38.342371 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342375 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342379 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342383 | orchestrator |
2025-06-01 04:51:38.342387 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 04:51:38.342392 | orchestrator | Sunday 01 June 2025 04:50:35 +0000 (0:00:00.316) 0:09:37.693 ***********
2025-06-01 04:51:38.342396 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342400 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342410 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342414 | orchestrator |
2025-06-01 04:51:38.342418 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 04:51:38.342422 | orchestrator | Sunday 01 June 2025 04:50:35 +0000 (0:00:00.643) 0:09:38.337 ***********
2025-06-01 04:51:38.342426 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342430 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342434 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342438 | orchestrator |
2025-06-01 04:51:38.342442 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 04:51:38.342446 | orchestrator | Sunday 01 June 2025 04:50:36 +0000 (0:00:00.321) 0:09:38.658 ***********
2025-06-01 04:51:38.342450 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342457 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342461 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342465 | orchestrator |
2025-06-01 04:51:38.342469 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 04:51:38.342473 | orchestrator | Sunday 01 June 2025 04:50:36 +0000 (0:00:00.329) 0:09:38.988 ***********
2025-06-01 04:51:38.342477 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.342481 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.342486 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.342490 | orchestrator |
2025-06-01 04:51:38.342494 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-06-01 04:51:38.342498 | orchestrator | Sunday 01 June 2025 04:50:37 +0000 (0:00:00.883) 0:09:39.871 ***********
2025-06-01 04:51:38.342502 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.342506 | orchestrator |
2025-06-01 04:51:38.342521 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-01 04:51:38.342526 | orchestrator | Sunday 01 June 2025 04:50:37 +0000 (0:00:00.535) 0:09:40.407 ***********
2025-06-01 04:51:38.342530 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.342534 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 04:51:38.342538 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 04:51:38.342542 | orchestrator |
2025-06-01 04:51:38.342548 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-01 04:51:38.342553 | orchestrator | Sunday 01 June 2025 04:50:39 +0000 (0:00:02.143) 0:09:42.551 ***********
2025-06-01 04:51:38.342557 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 04:51:38.342561 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 04:51:38.342565 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:51:38.342569 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 04:51:38.342573 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 04:51:38.342577 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:51:38.342581 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 04:51:38.342585 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 04:51:38.342589 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:51:38.342593 | orchestrator |
2025-06-01 04:51:38.342598 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-06-01 04:51:38.342602 | orchestrator | Sunday 01 June 2025 04:50:41 +0000 (0:00:01.516) 0:09:44.067 ***********
2025-06-01 04:51:38.342606 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:51:38.342610 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:51:38.342614 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:51:38.342618 | orchestrator |
2025-06-01 04:51:38.342622 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-06-01 04:51:38.342626 | orchestrator | Sunday 01 June 2025 04:50:41 +0000 (0:00:00.316) 0:09:44.383 ***********
2025-06-01 04:51:38.342630 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:51:38.342638 | orchestrator |
2025-06-01 04:51:38.342642 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-06-01 04:51:38.342646 | orchestrator | Sunday 01 June 2025 04:50:42 +0000 (0:00:00.553) 0:09:44.937 ***********
2025-06-01 04:51:38.342650 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 04:51:38.342655 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 04:51:38.342659 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 04:51:38.342663 | orchestrator |
2025-06-01 04:51:38.342667 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-06-01 04:51:38.342671 | orchestrator | Sunday 01 June 2025 04:50:43 +0000 (0:00:01.397) 0:09:46.334 ***********
2025-06-01 04:51:38.342675 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.342679 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 04:51:38.342683 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.342687 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 04:51:38.342692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.342696 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 04:51:38.342700 | orchestrator |
2025-06-01 04:51:38.342704 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-01 04:51:38.342708 | orchestrator | Sunday 01 June 2025 04:50:47 +0000 (0:00:04.192) 0:09:50.527 ***********
2025-06-01 04:51:38.342712 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:51:38.342716 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 04:51:38.342720 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] =>
(item=None) 2025-06-01 04:51:38.342724 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-01 04:51:38.342731 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:51:38.342735 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-01 04:51:38.342740 | orchestrator | 2025-06-01 04:51:38.342744 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-01 04:51:38.342748 | orchestrator | Sunday 01 June 2025 04:50:50 +0000 (0:00:02.239) 0:09:52.766 *********** 2025-06-01 04:51:38.342752 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-01 04:51:38.342756 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.342760 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-01 04:51:38.342764 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.342768 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-01 04:51:38.342772 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.342776 | orchestrator | 2025-06-01 04:51:38.342780 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-01 04:51:38.342785 | orchestrator | Sunday 01 June 2025 04:50:51 +0000 (0:00:01.275) 0:09:54.042 *********** 2025-06-01 04:51:38.342789 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-01 04:51:38.342793 | orchestrator | 2025-06-01 04:51:38.342797 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-01 04:51:38.342803 | orchestrator | Sunday 01 June 2025 04:50:51 +0000 (0:00:00.210) 0:09:54.252 *********** 2025-06-01 04:51:38.342812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342816 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342833 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.342837 | orchestrator | 2025-06-01 04:51:38.342841 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-01 04:51:38.342846 | orchestrator | Sunday 01 June 2025 04:50:52 +0000 (0:00:01.245) 0:09:55.497 *********** 2025-06-01 04:51:38.342850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 04:51:38.342870 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.342874 | orchestrator | 2025-06-01 04:51:38.342879 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-01 04:51:38.342883 | orchestrator | Sunday 01 June 2025 04:50:53 +0000 (0:00:00.597) 0:09:56.095 *********** 2025-06-01 04:51:38.342887 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 04:51:38.342891 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 04:51:38.342895 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 04:51:38.342899 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 04:51:38.342903 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 04:51:38.342907 | orchestrator | 2025-06-01 04:51:38.342911 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-01 04:51:38.342916 | orchestrator | Sunday 01 June 2025 04:51:24 +0000 (0:00:30.713) 0:10:26.808 *********** 2025-06-01 04:51:38.342920 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.342924 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.342928 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.342932 | orchestrator | 2025-06-01 04:51:38.342936 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-01 04:51:38.342940 | orchestrator | Sunday 01 June 2025 04:51:24 +0000 (0:00:00.325) 0:10:27.133 
*********** 2025-06-01 04:51:38.342944 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.342953 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.342957 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.342962 | orchestrator | 2025-06-01 04:51:38.342966 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-01 04:51:38.342970 | orchestrator | Sunday 01 June 2025 04:51:24 +0000 (0:00:00.317) 0:10:27.450 *********** 2025-06-01 04:51:38.342974 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.342978 | orchestrator | 2025-06-01 04:51:38.342982 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-01 04:51:38.342986 | orchestrator | Sunday 01 June 2025 04:51:25 +0000 (0:00:00.792) 0:10:28.243 *********** 2025-06-01 04:51:38.342990 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.342994 | orchestrator | 2025-06-01 04:51:38.342998 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-01 04:51:38.343002 | orchestrator | Sunday 01 June 2025 04:51:26 +0000 (0:00:00.569) 0:10:28.812 *********** 2025-06-01 04:51:38.343007 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.343011 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.343015 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.343019 | orchestrator | 2025-06-01 04:51:38.343025 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-01 04:51:38.343029 | orchestrator | Sunday 01 June 2025 04:51:27 +0000 (0:00:01.264) 0:10:30.076 *********** 2025-06-01 04:51:38.343033 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.343037 | orchestrator | 
changed: [testbed-node-4] 2025-06-01 04:51:38.343041 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.343045 | orchestrator | 2025-06-01 04:51:38.343050 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-01 04:51:38.343054 | orchestrator | Sunday 01 June 2025 04:51:29 +0000 (0:00:01.523) 0:10:31.600 *********** 2025-06-01 04:51:38.343058 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:51:38.343062 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:51:38.343066 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:51:38.343070 | orchestrator | 2025-06-01 04:51:38.343074 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-01 04:51:38.343078 | orchestrator | Sunday 01 June 2025 04:51:30 +0000 (0:00:01.774) 0:10:33.374 *********** 2025-06-01 04:51:38.343082 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.343086 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.343090 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 04:51:38.343095 | orchestrator | 2025-06-01 04:51:38.343099 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 04:51:38.343103 | orchestrator | Sunday 01 June 2025 04:51:33 +0000 (0:00:02.628) 0:10:36.003 *********** 2025-06-01 04:51:38.343107 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.343111 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.343115 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.343119 | orchestrator | 2025-06-01 04:51:38.343123 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-01 04:51:38.343127 | orchestrator | Sunday 01 June 2025 04:51:33 +0000 (0:00:00.348) 0:10:36.352 *********** 2025-06-01 04:51:38.343131 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:51:38.343135 | orchestrator | 2025-06-01 04:51:38.343140 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-01 04:51:38.343144 | orchestrator | Sunday 01 June 2025 04:51:34 +0000 (0:00:00.518) 0:10:36.871 *********** 2025-06-01 04:51:38.343151 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:51:38.343155 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:51:38.343159 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:51:38.343163 | orchestrator | 2025-06-01 04:51:38.343167 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-01 04:51:38.343171 | orchestrator | Sunday 01 June 2025 04:51:34 +0000 (0:00:00.602) 0:10:37.474 *********** 2025-06-01 04:51:38.343175 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.343180 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:51:38.343184 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:51:38.343188 | orchestrator | 2025-06-01 04:51:38.343192 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-01 04:51:38.343196 | orchestrator | Sunday 01 June 2025 04:51:35 +0000 (0:00:00.336) 0:10:37.810 *********** 2025-06-01 04:51:38.343200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:51:38.343204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:51:38.343208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:51:38.343212 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:51:38.343216 | 
orchestrator |
2025-06-01 04:51:38.343220 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-01 04:51:38.343224 | orchestrator | Sunday 01 June 2025 04:51:35 +0000 (0:00:00.616) 0:10:38.427 ***********
2025-06-01 04:51:38.343229 | orchestrator | ok: [testbed-node-3]
2025-06-01 04:51:38.343233 | orchestrator | ok: [testbed-node-4]
2025-06-01 04:51:38.343237 | orchestrator | ok: [testbed-node-5]
2025-06-01 04:51:38.343241 | orchestrator |
2025-06-01 04:51:38.343245 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:51:38.343249 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-06-01 04:51:38.343255 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-06-01 04:51:38.343260 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-06-01 04:51:38.343264 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-06-01 04:51:38.343268 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-06-01 04:51:38.343272 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-06-01 04:51:38.343276 | orchestrator |
2025-06-01 04:51:38.343280 | orchestrator |
2025-06-01 04:51:38.343284 | orchestrator |
2025-06-01 04:51:38.343288 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:51:38.343293 | orchestrator | Sunday 01 June 2025 04:51:36 +0000 (0:00:00.263) 0:10:38.691 ***********
2025-06-01 04:51:38.343299 | orchestrator | ===============================================================================
2025-06-01 04:51:38.343303 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 54.33s
2025-06-01 04:51:38.343307 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.09s
2025-06-01 04:51:38.343311 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.71s
2025-06-01 04:51:38.343316 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.88s
2025-06-01 04:51:38.343320 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.86s
2025-06-01 04:51:38.343324 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.84s
2025-06-01 04:51:38.343328 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.74s
2025-06-01 04:51:38.343335 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.91s
2025-06-01 04:51:38.343339 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.68s
2025-06-01 04:51:38.343343 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.71s
2025-06-01 04:51:38.343347 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.31s
2025-06-01 04:51:38.343351 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.25s
2025-06-01 04:51:38.343355 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.62s
2025-06-01 04:51:38.343359 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.19s
2025-06-01 04:51:38.343363 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.82s
2025-06-01 04:51:38.343367 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.64s
2025-06-01 04:51:38.343371 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.51s
2025-06-01 04:51:38.343376 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.39s
2025-06-01 04:51:38.343380 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.29s
2025-06-01 04:51:38.343384 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.07s
2025-06-01 04:51:38.343388 | orchestrator | 2025-06-01 04:51:38 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED
2025-06-01 04:51:38.343392 | orchestrator | 2025-06-01 04:51:38 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:51:41.368423 | orchestrator | 2025-06-01 04:51:41 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:51:41.370109 | orchestrator | 2025-06-01 04:51:41 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:51:41.371939 | orchestrator | 2025-06-01 04:51:41 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED
2025-06-01 04:51:41.372390 | orchestrator | 2025-06-01 04:51:41 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:51:44.435602 | orchestrator | 2025-06-01 04:51:44 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:51:44.437627 | orchestrator | 2025-06-01 04:51:44 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED
2025-06-01 04:51:44.439923 | orchestrator | 2025-06-01 04:51:44 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED
2025-06-01 04:51:44.440019 | orchestrator | 2025-06-01 04:51:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:51:47.480826 | orchestrator | 2025-06-01 04:51:47 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state STARTED
2025-06-01 04:51:47.481855 | orchestrator | 2025-06-01 04:51:47 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state
STARTED 2025-06-01
04:52:33.255748 | orchestrator | 2025-06-01 04:52:33 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:52:36.305969 | orchestrator |
2025-06-01 04:52:36.306135 | orchestrator |
2025-06-01 04:52:36.306156 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:52:36.306169 | orchestrator |
2025-06-01 04:52:36.306200 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 04:52:36.306213 | orchestrator | Sunday 01 June 2025 04:49:44 +0000 (0:00:00.259) 0:00:00.259 ***********
2025-06-01 04:52:36.306224 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:52:36.306236 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:52:36.306247 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:52:36.306257 | orchestrator |
2025-06-01 04:52:36.306268 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 04:52:36.306279 | orchestrator | Sunday 01 June 2025 04:49:44 +0000 (0:00:00.302) 0:00:00.561 ***********
2025-06-01 04:52:36.306290 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-06-01 04:52:36.306302 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-06-01 04:52:36.306312 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-06-01 04:52:36.306323 | orchestrator |
2025-06-01 04:52:36.306334 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-06-01 04:52:36.306345 | orchestrator |
2025-06-01 04:52:36.306355 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-01 04:52:36.306366 | orchestrator | Sunday 01 June 2025 04:49:45 +0000 (0:00:00.415) 0:00:00.977 ***********
2025-06-01 04:52:36.306377 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:52:36.306388 | orchestrator |
2025-06-01 04:52:36.306399 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-06-01 04:52:36.306410 | orchestrator | Sunday 01 June 2025 04:49:45 +0000 (0:00:00.464) 0:00:01.442 ***********
2025-06-01 04:52:36.306434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 04:52:36.306446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 04:52:36.306456 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 04:52:36.306467 | orchestrator |
2025-06-01 04:52:36.306478 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-06-01 04:52:36.306489 | orchestrator | Sunday 01 June 2025 04:49:46 +0000 (0:00:00.634) 0:00:02.076 ***********
2025-06-01 04:52:36.306502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-01 04:52:36.306517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-01 04:52:36.306549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-01 04:52:36.306571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.306631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.306646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.306659 | orchestrator | 2025-06-01 04:52:36.306677 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 04:52:36.306688 | orchestrator | Sunday 01 June 2025 04:49:48 +0000 (0:00:01.540) 0:00:03.616 *********** 2025-06-01 04:52:36.306698 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:52:36.306709 | orchestrator | 2025-06-01 04:52:36.306720 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-01 04:52:36.306730 | orchestrator | Sunday 01 June 2025 04:49:48 +0000 (0:00:00.474) 0:00:04.091 *********** 2025-06-01 04:52:36.306751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.306764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.306781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.306793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.306819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.306833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.306844 | orchestrator | 2025-06-01 04:52:36.306855 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-01 04:52:36.306871 | orchestrator | Sunday 01 June 2025 04:49:50 +0000 (0:00:02.350) 0:00:06.441 *********** 2025-06-01 04:52:36.306883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:52:36.306895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:52:36.306914 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:36.306926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:52:36.306946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:52:36.306958 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:36.306975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:52:36.306987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:52:36.307005 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:36.307017 | orchestrator | 2025-06-01 04:52:36.307028 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] 
*** 2025-06-01 04:52:36.307038 | orchestrator | Sunday 01 June 2025 04:49:52 +0000 (0:00:01.437) 0:00:07.879 *********** 2025-06-01 04:52:36.307050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:52:36.307069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:52:36.307081 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:36.307098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:52:36.307110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:52:36.307128 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:36.307153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 04:52:36.307174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 04:52:36.307186 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:36.307197 | orchestrator | 2025-06-01 04:52:36.307207 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-01 04:52:36.307218 | orchestrator | Sunday 01 June 2025 04:49:53 +0000 (0:00:00.936) 0:00:08.816 *********** 2025-06-01 04:52:36.307234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.307246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.307270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.307289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.307307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.307319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.307337 | orchestrator | 2025-06-01 04:52:36.307348 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-01 04:52:36.307359 | orchestrator | Sunday 01 June 2025 04:49:55 +0000 (0:00:02.350) 0:00:11.166 *********** 2025-06-01 04:52:36.307370 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:36.307381 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:36.307392 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:36.307403 | orchestrator | 2025-06-01 04:52:36.307414 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-01 04:52:36.307425 | orchestrator | Sunday 01 June 2025 04:49:58 +0000 (0:00:03.162) 0:00:14.328 *********** 2025-06-01 04:52:36.307435 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:36.307446 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:36.307457 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:36.307471 | orchestrator | 2025-06-01 04:52:36.307482 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-01 04:52:36.307493 | orchestrator | Sunday 01 June 2025 04:50:00 +0000 (0:00:01.594) 0:00:15.923 *********** 2025-06-01 04:52:36.307504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.307522 | orchestrator | 2025-06-01 04:52:36 | INFO  | Task ef85be3b-90f3-421f-b23f-a0cbab1466ca is in state SUCCESS 2025-06-01 04:52:36.307535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.307552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 04:52:36.307571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.307623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.307647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 04:52:36.307660 | orchestrator | 2025-06-01 04:52:36.307671 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 04:52:36.307682 | orchestrator | Sunday 01 June 2025 04:50:02 +0000 
(0:00:02.032) 0:00:17.956 *********** 2025-06-01 04:52:36.307692 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:36.307703 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:36.307714 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:36.307725 | orchestrator | 2025-06-01 04:52:36.307735 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 04:52:36.307754 | orchestrator | Sunday 01 June 2025 04:50:02 +0000 (0:00:00.345) 0:00:18.302 *********** 2025-06-01 04:52:36.307765 | orchestrator | 2025-06-01 04:52:36.307781 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 04:52:36.307792 | orchestrator | Sunday 01 June 2025 04:50:02 +0000 (0:00:00.067) 0:00:18.369 *********** 2025-06-01 04:52:36.307802 | orchestrator | 2025-06-01 04:52:36.307813 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 04:52:36.307824 | orchestrator | Sunday 01 June 2025 04:50:02 +0000 (0:00:00.092) 0:00:18.462 *********** 2025-06-01 04:52:36.307835 | orchestrator | 2025-06-01 04:52:36.307846 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-01 04:52:36.307856 | orchestrator | Sunday 01 June 2025 04:50:03 +0000 (0:00:00.423) 0:00:18.886 *********** 2025-06-01 04:52:36.307867 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:36.307878 | orchestrator | 2025-06-01 04:52:36.307888 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-01 04:52:36.307899 | orchestrator | Sunday 01 June 2025 04:50:03 +0000 (0:00:00.193) 0:00:19.079 *********** 2025-06-01 04:52:36.307910 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:36.307921 | orchestrator | 2025-06-01 04:52:36.307932 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 
2025-06-01 04:52:36.307942 | orchestrator | Sunday 01 June 2025 04:50:03 +0000 (0:00:00.224) 0:00:19.303 *********** 2025-06-01 04:52:36.307953 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:36.307964 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:36.307974 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:36.307985 | orchestrator | 2025-06-01 04:52:36.307996 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-01 04:52:36.308007 | orchestrator | Sunday 01 June 2025 04:51:08 +0000 (0:01:04.440) 0:01:23.744 *********** 2025-06-01 04:52:36.308017 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:36.308028 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:36.308039 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:36.308049 | orchestrator | 2025-06-01 04:52:36.308060 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 04:52:36.308071 | orchestrator | Sunday 01 June 2025 04:52:25 +0000 (0:01:17.511) 0:02:41.255 *********** 2025-06-01 04:52:36.308082 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:52:36.308093 | orchestrator | 2025-06-01 04:52:36.308103 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-01 04:52:36.308114 | orchestrator | Sunday 01 June 2025 04:52:26 +0000 (0:00:00.706) 0:02:41.961 *********** 2025-06-01 04:52:36.308125 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:36.308136 | orchestrator | 2025-06-01 04:52:36.308147 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-01 04:52:36.308157 | orchestrator | Sunday 01 June 2025 04:52:28 +0000 (0:00:02.295) 0:02:44.256 *********** 2025-06-01 04:52:36.308168 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:36.308179 | 
orchestrator | 2025-06-01 04:52:36.308189 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-01 04:52:36.308200 | orchestrator | Sunday 01 June 2025 04:52:30 +0000 (0:00:01.991) 0:02:46.247 *********** 2025-06-01 04:52:36.308211 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:36.308222 | orchestrator | 2025-06-01 04:52:36.308232 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-01 04:52:36.308243 | orchestrator | Sunday 01 June 2025 04:52:33 +0000 (0:00:02.535) 0:02:48.783 *********** 2025-06-01 04:52:36.308254 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:36.308264 | orchestrator | 2025-06-01 04:52:36.308275 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:52:36.308287 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 04:52:36.308305 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 04:52:36.308323 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 04:52:36.308334 | orchestrator | 2025-06-01 04:52:36.308345 | orchestrator | 2025-06-01 04:52:36.308356 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:52:36.308366 | orchestrator | Sunday 01 June 2025 04:52:35 +0000 (0:00:02.524) 0:02:51.307 *********** 2025-06-01 04:52:36.308377 | orchestrator | =============================================================================== 2025-06-01 04:52:36.308388 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.51s 2025-06-01 04:52:36.308399 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.44s 2025-06-01 04:52:36.308410 | orchestrator | 
opensearch : Copying over opensearch service config file ---------------- 3.16s 2025-06-01 04:52:36.308420 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.54s 2025-06-01 04:52:36.308431 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.52s 2025-06-01 04:52:36.308442 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.35s 2025-06-01 04:52:36.308453 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.35s 2025-06-01 04:52:36.308463 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.30s 2025-06-01 04:52:36.308474 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.03s 2025-06-01 04:52:36.308484 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 1.99s 2025-06-01 04:52:36.308495 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.59s 2025-06-01 04:52:36.308506 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.54s 2025-06-01 04:52:36.308521 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.44s 2025-06-01 04:52:36.308532 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.94s 2025-06-01 04:52:36.308543 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-06-01 04:52:36.308554 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.63s 2025-06-01 04:52:36.308564 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.58s 2025-06-01 04:52:36.308575 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-06-01 04:52:36.308603 | orchestrator | 
opensearch : include_tasks ---------------------------------------------- 0.46s 2025-06-01 04:52:36.308614 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-06-01 04:52:36.308625 | orchestrator | 2025-06-01 04:52:36 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:36.308635 | orchestrator | 2025-06-01 04:52:36 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:36.308646 | orchestrator | 2025-06-01 04:52:36 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:39.348237 | orchestrator | 2025-06-01 04:52:39 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:39.349521 | orchestrator | 2025-06-01 04:52:39 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:39.349555 | orchestrator | 2025-06-01 04:52:39 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:42.402838 | orchestrator | 2025-06-01 04:52:42 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:42.404460 | orchestrator | 2025-06-01 04:52:42 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:42.404696 | orchestrator | 2025-06-01 04:52:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:45.453248 | orchestrator | 2025-06-01 04:52:45 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:45.455663 | orchestrator | 2025-06-01 04:52:45 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:45.455731 | orchestrator | 2025-06-01 04:52:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:48.498369 | orchestrator | 2025-06-01 04:52:48 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:48.499767 | orchestrator | 2025-06-01 04:52:48 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in 
state STARTED 2025-06-01 04:52:48.499808 | orchestrator | 2025-06-01 04:52:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:51.537912 | orchestrator | 2025-06-01 04:52:51 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:51.538014 | orchestrator | 2025-06-01 04:52:51 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:51.538091 | orchestrator | 2025-06-01 04:52:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:54.587236 | orchestrator | 2025-06-01 04:52:54 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state STARTED 2025-06-01 04:52:54.590162 | orchestrator | 2025-06-01 04:52:54 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:54.590341 | orchestrator | 2025-06-01 04:52:54 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:52:57.648820 | orchestrator | 2025-06-01 04:52:57.649253 | orchestrator | 2025-06-01 04:52:57.649284 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-01 04:52:57.649296 | orchestrator | 2025-06-01 04:52:57.649306 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-01 04:52:57.649316 | orchestrator | Sunday 01 June 2025 04:49:44 +0000 (0:00:00.081) 0:00:00.081 *********** 2025-06-01 04:52:57.649326 | orchestrator | ok: [localhost] => { 2025-06-01 04:52:57.649338 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-06-01 04:52:57.649348 | orchestrator | } 2025-06-01 04:52:57.649359 | orchestrator | 2025-06-01 04:52:57.649369 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-01 04:52:57.649379 | orchestrator | Sunday 01 June 2025 04:49:44 +0000 (0:00:00.044) 0:00:00.125 *********** 2025-06-01 04:52:57.649389 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-01 04:52:57.649401 | orchestrator | ...ignoring 2025-06-01 04:52:57.649411 | orchestrator | 2025-06-01 04:52:57.649421 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-01 04:52:57.649431 | orchestrator | Sunday 01 June 2025 04:49:47 +0000 (0:00:02.751) 0:00:02.877 *********** 2025-06-01 04:52:57.649441 | orchestrator | skipping: [localhost] 2025-06-01 04:52:57.649451 | orchestrator | 2025-06-01 04:52:57.649461 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-01 04:52:57.649470 | orchestrator | Sunday 01 June 2025 04:49:47 +0000 (0:00:00.045) 0:00:02.923 *********** 2025-06-01 04:52:57.649496 | orchestrator | ok: [localhost] 2025-06-01 04:52:57.649507 | orchestrator | 2025-06-01 04:52:57.649516 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:52:57.649526 | orchestrator | 2025-06-01 04:52:57.649536 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:52:57.649546 | orchestrator | Sunday 01 June 2025 04:49:47 +0000 (0:00:00.130) 0:00:03.053 *********** 2025-06-01 04:52:57.649579 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.649589 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.649599 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.649635 | orchestrator | 2025-06-01 04:52:57.649646 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:52:57.649656 | orchestrator | Sunday 01 June 2025 04:49:47 +0000 (0:00:00.296) 0:00:03.349 *********** 2025-06-01 04:52:57.649666 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-01 04:52:57.649676 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-01 04:52:57.649691 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-01 04:52:57.649707 | orchestrator | 2025-06-01 04:52:57.649722 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 04:52:57.649735 | orchestrator | 2025-06-01 04:52:57.649748 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 04:52:57.649761 | orchestrator | Sunday 01 June 2025 04:49:48 +0000 (0:00:00.747) 0:00:04.097 *********** 2025-06-01 04:52:57.649775 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 04:52:57.649788 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 04:52:57.649803 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 04:52:57.649811 | orchestrator | 2025-06-01 04:52:57.649819 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 04:52:57.649827 | orchestrator | Sunday 01 June 2025 04:49:49 +0000 (0:00:00.449) 0:00:04.547 *********** 2025-06-01 04:52:57.649834 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:52:57.649843 | orchestrator | 2025-06-01 04:52:57.649851 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-01 04:52:57.649859 | orchestrator | Sunday 01 June 2025 04:49:49 +0000 (0:00:00.462) 0:00:05.010 *********** 2025-06-01 04:52:57.649890 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.649908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.649928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.649938 | orchestrator | 2025-06-01 04:52:57.649954 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-01 04:52:57.649962 | orchestrator | Sunday 01 June 2025 04:49:52 +0000 (0:00:03.207) 0:00:08.217 *********** 2025-06-01 04:52:57.649970 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.649979 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.649987 | 
orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.649995 | orchestrator | 2025-06-01 04:52:57.650003 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-01 04:52:57.650010 | orchestrator | Sunday 01 June 2025 04:49:53 +0000 (0:00:00.668) 0:00:08.886 *********** 2025-06-01 04:52:57.650097 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.650110 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.650124 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.650137 | orchestrator | 2025-06-01 04:52:57.650149 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-01 04:52:57.650163 | orchestrator | Sunday 01 June 2025 04:49:54 +0000 (0:00:01.451) 0:00:10.337 *********** 2025-06-01 04:52:57.650184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.650208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.650229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.650238 | orchestrator | 2025-06-01 04:52:57.650246 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-01 04:52:57.650254 | orchestrator | Sunday 01 June 2025 04:49:59 +0000 (0:00:04.154) 0:00:14.492 *********** 2025-06-01 04:52:57.650262 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.650269 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.650277 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.650285 | orchestrator | 2025-06-01 04:52:57.650293 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-01 04:52:57.650301 | orchestrator | Sunday 01 June 2025 04:50:00 +0000 (0:00:01.071) 0:00:15.563 *********** 2025-06-01 04:52:57.650311 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.650323 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:57.650336 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:57.650348 | orchestrator | 2025-06-01 04:52:57.650361 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 04:52:57.650374 | orchestrator | Sunday 01 June 2025 04:50:04 +0000 (0:00:04.241) 0:00:19.805 *********** 2025-06-01 04:52:57.650388 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:52:57.650509 | orchestrator | 2025-06-01 04:52:57.650518 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-01 
04:52:57.650526 | orchestrator | Sunday 01 June 2025 04:50:05 +0000 (0:00:00.840) 0:00:20.645 *********** 2025-06-01 04:52:57.650545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650563 | orchestrator | 
skipping: [testbed-node-0] 2025-06-01 04:52:57.650577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650585 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.650600 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650639 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.650667 | orchestrator | 2025-06-01 04:52:57.650675 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-06-01 04:52:57.650683 | orchestrator | Sunday 01 June 2025 04:50:08 +0000 (0:00:03.107) 0:00:23.753 *********** 2025-06-01 04:52:57.650696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-06-01 04:52:57.650705 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.650719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650733 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 04:52:57.650745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650754 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.650762 | orchestrator | 2025-06-01 
04:52:57.650769 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-01 04:52:57.650777 | orchestrator | Sunday 01 June 2025 04:50:10 +0000 (0:00:02.439) 0:00:26.192 *********** 2025-06-01 04:52:57.650786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650804 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.650823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-06-01 04:52:57.650832 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.650841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 04:52:57.650855 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 04:52:57.650863 | orchestrator | 2025-06-01 04:52:57.650871 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-01 04:52:57.650879 | orchestrator | Sunday 01 June 2025 04:50:14 +0000 (0:00:03.362) 0:00:29.555 *********** 2025-06-01 04:52:57.650893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 2025-06-01 04:52:57 | INFO  | Task ec4a5992-a295-4617-9076-a88903194c9a is in state SUCCESS 2025-06-01 04:52:57.650907 | orchestrator | '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.650918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 04:52:57.650953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2025-06-01 04:52:57.650969 | orchestrator | 2025-06-01 04:52:57.650981 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-01 04:52:57.650993 | orchestrator | Sunday 01 June 2025 04:50:17 +0000 (0:00:03.306) 0:00:32.862 *********** 2025-06-01 04:52:57.651005 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.651017 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:57.651031 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:57.651044 | orchestrator | 2025-06-01 04:52:57.651057 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-01 04:52:57.651071 | orchestrator | Sunday 01 June 2025 04:50:19 +0000 (0:00:01.505) 0:00:34.367 *********** 2025-06-01 04:52:57.651084 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651098 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.651110 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.651123 | orchestrator | 2025-06-01 04:52:57.651136 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-01 04:52:57.651150 | orchestrator | Sunday 01 June 2025 04:50:19 +0000 (0:00:00.538) 0:00:34.906 *********** 2025-06-01 04:52:57.651164 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651178 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.651193 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.651206 | orchestrator | 2025-06-01 04:52:57.651219 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-01 04:52:57.651233 | orchestrator | Sunday 01 June 2025 04:50:19 +0000 (0:00:00.373) 0:00:35.280 *********** 2025-06-01 04:52:57.651259 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-01 04:52:57.651272 | orchestrator | ...ignoring 2025-06-01 04:52:57.651281 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-01 04:52:57.651290 | orchestrator | ...ignoring 2025-06-01 04:52:57.651299 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-01 04:52:57.651308 | orchestrator | ...ignoring 2025-06-01 04:52:57.651317 | orchestrator | 2025-06-01 04:52:57.651326 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-01 04:52:57.651335 | orchestrator | Sunday 01 June 2025 04:50:30 +0000 (0:00:10.905) 0:00:46.186 *********** 2025-06-01 04:52:57.651344 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651353 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.651362 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.651371 | orchestrator | 2025-06-01 04:52:57.651380 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-01 04:52:57.651389 | orchestrator | Sunday 01 June 2025 04:50:31 +0000 (0:00:00.696) 0:00:46.883 *********** 2025-06-01 04:52:57.651399 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.651408 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.651417 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.651426 | orchestrator | 2025-06-01 04:52:57.651436 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-01 04:52:57.651445 | orchestrator | Sunday 01 June 2025 04:50:31 +0000 (0:00:00.444) 0:00:47.327 *********** 2025-06-01 04:52:57.651454 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 04:52:57.651464 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.651473 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.651482 | orchestrator | 2025-06-01 04:52:57.651491 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-01 04:52:57.651500 | orchestrator | Sunday 01 June 2025 04:50:32 +0000 (0:00:00.409) 0:00:47.736 *********** 2025-06-01 04:52:57.651508 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.651515 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.651529 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.651542 | orchestrator | 2025-06-01 04:52:57.651553 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-01 04:52:57.651566 | orchestrator | Sunday 01 June 2025 04:50:32 +0000 (0:00:00.411) 0:00:48.149 *********** 2025-06-01 04:52:57.651578 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651588 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.651599 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.651659 | orchestrator | 2025-06-01 04:52:57.651684 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-01 04:52:57.651696 | orchestrator | Sunday 01 June 2025 04:50:33 +0000 (0:00:00.682) 0:00:48.831 *********** 2025-06-01 04:52:57.651704 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.651712 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.651720 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.651727 | orchestrator | 2025-06-01 04:52:57.651735 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 04:52:57.651743 | orchestrator | Sunday 01 June 2025 04:50:33 +0000 (0:00:00.440) 0:00:49.272 *********** 2025-06-01 04:52:57.651751 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 04:52:57.651758 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.651766 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-01 04:52:57.651774 | orchestrator | 2025-06-01 04:52:57.651782 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-01 04:52:57.651789 | orchestrator | Sunday 01 June 2025 04:50:34 +0000 (0:00:00.412) 0:00:49.684 *********** 2025-06-01 04:52:57.651805 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.651812 | orchestrator | 2025-06-01 04:52:57.651820 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-01 04:52:57.651828 | orchestrator | Sunday 01 June 2025 04:50:44 +0000 (0:00:10.182) 0:00:59.866 *********** 2025-06-01 04:52:57.651835 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651843 | orchestrator | 2025-06-01 04:52:57.651851 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 04:52:57.651859 | orchestrator | Sunday 01 June 2025 04:50:44 +0000 (0:00:00.120) 0:00:59.987 *********** 2025-06-01 04:52:57.651871 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.651879 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.651887 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.651894 | orchestrator | 2025-06-01 04:52:57.651902 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-01 04:52:57.651910 | orchestrator | Sunday 01 June 2025 04:50:45 +0000 (0:00:01.045) 0:01:01.032 *********** 2025-06-01 04:52:57.651917 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.651925 | orchestrator | 2025-06-01 04:52:57.651933 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-01 04:52:57.651941 | orchestrator | Sunday 01 
June 2025 04:50:53 +0000 (0:00:07.800) 0:01:08.833 *********** 2025-06-01 04:52:57.651948 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651956 | orchestrator | 2025-06-01 04:52:57.651964 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-01 04:52:57.651972 | orchestrator | Sunday 01 June 2025 04:50:55 +0000 (0:00:01.573) 0:01:10.407 *********** 2025-06-01 04:52:57.651980 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.651987 | orchestrator | 2025-06-01 04:52:57.651995 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-01 04:52:57.652003 | orchestrator | Sunday 01 June 2025 04:50:57 +0000 (0:00:02.522) 0:01:12.930 *********** 2025-06-01 04:52:57.652011 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.652018 | orchestrator | 2025-06-01 04:52:57.652026 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-01 04:52:57.652034 | orchestrator | Sunday 01 June 2025 04:50:57 +0000 (0:00:00.123) 0:01:13.053 *********** 2025-06-01 04:52:57.652041 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.652049 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.652057 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.652064 | orchestrator | 2025-06-01 04:52:57.652072 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-01 04:52:57.652079 | orchestrator | Sunday 01 June 2025 04:50:58 +0000 (0:00:00.529) 0:01:13.582 *********** 2025-06-01 04:52:57.652087 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.652095 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-01 04:52:57.652102 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:57.652113 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:57.652126 | orchestrator | 
2025-06-01 04:52:57.652138 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-01 04:52:57.652151 | orchestrator | skipping: no hosts matched 2025-06-01 04:52:57.652164 | orchestrator | 2025-06-01 04:52:57.652177 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 04:52:57.652349 | orchestrator | 2025-06-01 04:52:57.652366 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-01 04:52:57.652375 | orchestrator | Sunday 01 June 2025 04:50:58 +0000 (0:00:00.329) 0:01:13.912 *********** 2025-06-01 04:52:57.652383 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:52:57.652391 | orchestrator | 2025-06-01 04:52:57.652398 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-01 04:52:57.652406 | orchestrator | Sunday 01 June 2025 04:51:17 +0000 (0:00:18.709) 0:01:32.621 *********** 2025-06-01 04:52:57.652426 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.652434 | orchestrator | 2025-06-01 04:52:57.652441 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-01 04:52:57.652449 | orchestrator | Sunday 01 June 2025 04:51:37 +0000 (0:00:20.621) 0:01:53.243 *********** 2025-06-01 04:52:57.652457 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.652465 | orchestrator | 2025-06-01 04:52:57.652473 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 04:52:57.652480 | orchestrator | 2025-06-01 04:52:57.652488 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-01 04:52:57.652496 | orchestrator | Sunday 01 June 2025 04:51:40 +0000 (0:00:02.580) 0:01:55.824 *********** 2025-06-01 04:52:57.652503 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:52:57.652511 | orchestrator | 
2025-06-01 04:52:57.652519 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-01 04:52:57.652526 | orchestrator | Sunday 01 June 2025 04:52:00 +0000 (0:00:19.790) 0:02:15.615 *********** 2025-06-01 04:52:57.652534 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.652542 | orchestrator | 2025-06-01 04:52:57.652549 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-01 04:52:57.652557 | orchestrator | Sunday 01 June 2025 04:52:20 +0000 (0:00:20.589) 0:02:36.204 *********** 2025-06-01 04:52:57.652573 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.652581 | orchestrator | 2025-06-01 04:52:57.652589 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-01 04:52:57.652597 | orchestrator | 2025-06-01 04:52:57.652627 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-01 04:52:57.652637 | orchestrator | Sunday 01 June 2025 04:52:23 +0000 (0:00:02.772) 0:02:38.977 *********** 2025-06-01 04:52:57.652645 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.652652 | orchestrator | 2025-06-01 04:52:57.652660 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-01 04:52:57.652668 | orchestrator | Sunday 01 June 2025 04:52:39 +0000 (0:00:15.954) 0:02:54.931 *********** 2025-06-01 04:52:57.652675 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.652683 | orchestrator | 2025-06-01 04:52:57.652691 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-01 04:52:57.652699 | orchestrator | Sunday 01 June 2025 04:52:40 +0000 (0:00:00.559) 0:02:55.491 *********** 2025-06-01 04:52:57.652707 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.652715 | orchestrator | 2025-06-01 04:52:57.652722 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-06-01 04:52:57.652730 | orchestrator | 2025-06-01 04:52:57.652738 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-01 04:52:57.652745 | orchestrator | Sunday 01 June 2025 04:52:42 +0000 (0:00:02.448) 0:02:57.939 *********** 2025-06-01 04:52:57.652753 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:52:57.652761 | orchestrator | 2025-06-01 04:52:57.652769 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-01 04:52:57.652782 | orchestrator | Sunday 01 June 2025 04:52:43 +0000 (0:00:00.520) 0:02:58.459 *********** 2025-06-01 04:52:57.652790 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.652798 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.652806 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.652814 | orchestrator | 2025-06-01 04:52:57.652822 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-01 04:52:57.652830 | orchestrator | Sunday 01 June 2025 04:52:45 +0000 (0:00:02.357) 0:03:00.816 *********** 2025-06-01 04:52:57.652837 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.652845 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.652853 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.652861 | orchestrator | 2025-06-01 04:52:57.652869 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-01 04:52:57.652877 | orchestrator | Sunday 01 June 2025 04:52:47 +0000 (0:00:01.970) 0:03:02.787 *********** 2025-06-01 04:52:57.652890 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.652898 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.652906 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.652913 | orchestrator | 
2025-06-01 04:52:57.652921 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-01 04:52:57.652929 | orchestrator | Sunday 01 June 2025 04:52:49 +0000 (0:00:02.122) 0:03:04.909 *********** 2025-06-01 04:52:57.652937 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.652944 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.652952 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:52:57.652960 | orchestrator | 2025-06-01 04:52:57.652967 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-01 04:52:57.652975 | orchestrator | Sunday 01 June 2025 04:52:51 +0000 (0:00:01.959) 0:03:06.869 *********** 2025-06-01 04:52:57.652984 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:52:57.652994 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:52:57.653003 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:52:57.653012 | orchestrator | 2025-06-01 04:52:57.653021 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-01 04:52:57.653030 | orchestrator | Sunday 01 June 2025 04:52:54 +0000 (0:00:03.040) 0:03:09.909 *********** 2025-06-01 04:52:57.653039 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:52:57.653049 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:52:57.653057 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:52:57.653066 | orchestrator | 2025-06-01 04:52:57.653075 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:52:57.653085 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-01 04:52:57.653095 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-01 04:52:57.653105 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-06-01 04:52:57.653114 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-01 04:52:57.653123 | orchestrator | 2025-06-01 04:52:57.653135 | orchestrator | 2025-06-01 04:52:57.653149 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:52:57.653161 | orchestrator | Sunday 01 June 2025 04:52:54 +0000 (0:00:00.231) 0:03:10.140 *********** 2025-06-01 04:52:57.653173 | orchestrator | =============================================================================== 2025-06-01 04:52:57.653188 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.21s 2025-06-01 04:52:57.653202 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.50s 2025-06-01 04:52:57.653216 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.95s 2025-06-01 04:52:57.653225 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2025-06-01 04:52:57.653234 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.18s 2025-06-01 04:52:57.653250 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.80s 2025-06-01 04:52:57.653259 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.35s 2025-06-01 04:52:57.653269 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.24s 2025-06-01 04:52:57.653278 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.15s 2025-06-01 04:52:57.653287 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.36s 2025-06-01 04:52:57.653296 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.31s 2025-06-01 04:52:57.653312 | orchestrator | 
mariadb : Ensuring config directories exist ----------------------------- 3.21s 2025-06-01 04:52:57.653321 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.11s 2025-06-01 04:52:57.653330 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.04s 2025-06-01 04:52:57.653339 | orchestrator | Check MariaDB service --------------------------------------------------- 2.75s 2025-06-01 04:52:57.653347 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s 2025-06-01 04:52:57.653355 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.45s 2025-06-01 04:52:57.653362 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.44s 2025-06-01 04:52:57.653370 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.36s 2025-06-01 04:52:57.653382 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.12s 2025-06-01 04:52:57.653390 | orchestrator | 2025-06-01 04:52:57 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:52:57.653398 | orchestrator | 2025-06-01 04:52:57 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:52:57.653406 | orchestrator | 2025-06-01 04:52:57 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state STARTED 2025-06-01 04:52:57.653414 | orchestrator | 2025-06-01 04:52:57 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:53:49.576327 | orchestrator | 
2025-06-01 04:53:49 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:53:49.577570 | orchestrator | 2025-06-01 04:53:49 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:53:49.581805 | orchestrator | 2025-06-01 04:53:49 | INFO  | Task 1ed3bf69-d212-495d-aab1-e5dbcf7fa910 is in state SUCCESS 2025-06-01 04:53:49.585700 | orchestrator | 2025-06-01 04:53:49.585759 | orchestrator | 2025-06-01 04:53:49.585783 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-01 04:53:49.585796 | orchestrator | 2025-06-01 04:53:49.585808 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-01 04:53:49.585819 | orchestrator | Sunday 01 June 2025 04:51:41 +0000 (0:00:00.597) 0:00:00.597 *********** 2025-06-01 04:53:49.585830 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:53:49.585842 | orchestrator | 2025-06-01 04:53:49.585854 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-01 04:53:49.585865 | orchestrator | Sunday 01 June 2025 04:51:41 +0000 (0:00:00.661) 0:00:01.259 *********** 2025-06-01 04:53:49.585925 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.585939 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.585950 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.586556 | orchestrator | 2025-06-01 04:53:49.586568 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-01 04:53:49.586604 | orchestrator | Sunday 01 June 2025 04:51:42 +0000 (0:00:00.590) 0:00:01.849 *********** 2025-06-01 04:53:49.586616 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.586627 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.586638 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.586648 
| orchestrator | 2025-06-01 04:53:49.586699 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-01 04:53:49.586713 | orchestrator | Sunday 01 June 2025 04:51:42 +0000 (0:00:00.283) 0:00:02.133 *********** 2025-06-01 04:53:49.586724 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.586734 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.586745 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.586756 | orchestrator | 2025-06-01 04:53:49.586767 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-01 04:53:49.586778 | orchestrator | Sunday 01 June 2025 04:51:43 +0000 (0:00:00.795) 0:00:02.929 *********** 2025-06-01 04:53:49.586788 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.586799 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.586809 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.586820 | orchestrator | 2025-06-01 04:53:49.586831 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-01 04:53:49.586841 | orchestrator | Sunday 01 June 2025 04:51:43 +0000 (0:00:00.333) 0:00:03.262 *********** 2025-06-01 04:53:49.586852 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.586863 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.586873 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.586903 | orchestrator | 2025-06-01 04:53:49.586914 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-01 04:53:49.586925 | orchestrator | Sunday 01 June 2025 04:51:44 +0000 (0:00:00.337) 0:00:03.599 *********** 2025-06-01 04:53:49.586936 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.586947 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.586969 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.586980 | orchestrator | 2025-06-01 04:53:49.586991 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-01 04:53:49.587002 | orchestrator | Sunday 01 June 2025 04:51:44 +0000 (0:00:00.320) 0:00:03.920 *********** 2025-06-01 04:53:49.587013 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.587025 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.587036 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.587046 | orchestrator | 2025-06-01 04:53:49.587057 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-01 04:53:49.587068 | orchestrator | Sunday 01 June 2025 04:51:44 +0000 (0:00:00.532) 0:00:04.453 *********** 2025-06-01 04:53:49.587079 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.587089 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.587100 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.587111 | orchestrator | 2025-06-01 04:53:49.587121 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-01 04:53:49.587132 | orchestrator | Sunday 01 June 2025 04:51:45 +0000 (0:00:00.305) 0:00:04.758 *********** 2025-06-01 04:53:49.587145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 04:53:49.587157 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:53:49.587170 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:53:49.587182 | orchestrator | 2025-06-01 04:53:49.587195 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-01 04:53:49.587207 | orchestrator | Sunday 01 June 2025 04:51:45 +0000 (0:00:00.670) 0:00:05.428 *********** 2025-06-01 04:53:49.587219 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.587232 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.587245 | 
orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.587257 | orchestrator | 2025-06-01 04:53:49.587270 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-01 04:53:49.587291 | orchestrator | Sunday 01 June 2025 04:51:46 +0000 (0:00:00.457) 0:00:05.886 *********** 2025-06-01 04:53:49.587304 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 04:53:49.587317 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:53:49.587330 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:53:49.587342 | orchestrator | 2025-06-01 04:53:49.587354 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-01 04:53:49.587367 | orchestrator | Sunday 01 June 2025 04:51:48 +0000 (0:00:02.191) 0:00:08.077 *********** 2025-06-01 04:53:49.587379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 04:53:49.587391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 04:53:49.587404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 04:53:49.587417 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.587430 | orchestrator | 2025-06-01 04:53:49.587443 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-01 04:53:49.587503 | orchestrator | Sunday 01 June 2025 04:51:49 +0000 (0:00:00.453) 0:00:08.531 *********** 2025-06-01 04:53:49.587527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.587543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.587554 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.587565 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.587576 | orchestrator | 2025-06-01 04:53:49.587587 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-01 04:53:49.587597 | orchestrator | Sunday 01 June 2025 04:51:49 +0000 (0:00:00.799) 0:00:09.330 *********** 2025-06-01 04:53:49.587610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.587625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.587636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.587647 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.587719 | orchestrator | 2025-06-01 04:53:49.587733 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-01 04:53:49.587745 | orchestrator | Sunday 01 June 2025 04:51:50 +0000 (0:00:00.170) 0:00:09.500 *********** 2025-06-01 04:53:49.587767 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e13b5cc6903b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-01 04:51:47.080228', 'end': '2025-06-01 04:51:47.130185', 'delta': '0:00:00.049957', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e13b5cc6903b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-01 04:53:49.587782 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5f785eefa7ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-01 04:51:47.851411', 'end': '2025-06-01 04:51:47.891336', 'delta': '0:00:00.039925', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['5f785eefa7ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-01 04:53:49.587838 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a6c35d1f2602', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-01 04:51:48.406170', 'end': '2025-06-01 04:51:48.455607', 'delta': '0:00:00.049437', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6c35d1f2602'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-01 04:53:49.587852 | orchestrator | 2025-06-01 04:53:49.587863 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-01 04:53:49.587875 | orchestrator | Sunday 01 June 2025 04:51:50 +0000 (0:00:00.413) 0:00:09.914 *********** 2025-06-01 04:53:49.587885 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.587896 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.587907 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.587917 | orchestrator | 2025-06-01 04:53:49.587928 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-01 04:53:49.587939 | orchestrator | Sunday 01 June 2025 04:51:50 +0000 (0:00:00.425) 0:00:10.340 *********** 2025-06-01 04:53:49.587949 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-01 04:53:49.587960 | orchestrator | 2025-06-01 04:53:49.587971 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-01 04:53:49.587982 | 
orchestrator | Sunday 01 June 2025 04:51:52 +0000 (0:00:01.758) 0:00:12.098 *********** 2025-06-01 04:53:49.587992 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588003 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588014 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588025 | orchestrator | 2025-06-01 04:53:49.588035 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-01 04:53:49.588046 | orchestrator | Sunday 01 June 2025 04:51:52 +0000 (0:00:00.290) 0:00:12.389 *********** 2025-06-01 04:53:49.588056 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588067 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588078 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588096 | orchestrator | 2025-06-01 04:53:49.588107 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-01 04:53:49.588118 | orchestrator | Sunday 01 June 2025 04:51:53 +0000 (0:00:00.435) 0:00:12.824 *********** 2025-06-01 04:53:49.588128 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588139 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588150 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588161 | orchestrator | 2025-06-01 04:53:49.588171 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-01 04:53:49.588182 | orchestrator | Sunday 01 June 2025 04:51:53 +0000 (0:00:00.535) 0:00:13.359 *********** 2025-06-01 04:53:49.588191 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.588201 | orchestrator | 2025-06-01 04:53:49.588210 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-01 04:53:49.588220 | orchestrator | Sunday 01 June 2025 04:51:54 +0000 (0:00:00.137) 0:00:13.497 *********** 2025-06-01 04:53:49.588229 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 04:53:49.588238 | orchestrator | 2025-06-01 04:53:49.588248 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-01 04:53:49.588257 | orchestrator | Sunday 01 June 2025 04:51:54 +0000 (0:00:00.222) 0:00:13.719 *********** 2025-06-01 04:53:49.588267 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588276 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588286 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588295 | orchestrator | 2025-06-01 04:53:49.588305 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-01 04:53:49.588314 | orchestrator | Sunday 01 June 2025 04:51:54 +0000 (0:00:00.264) 0:00:13.984 *********** 2025-06-01 04:53:49.588324 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588333 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588343 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588353 | orchestrator | 2025-06-01 04:53:49.588362 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-01 04:53:49.588372 | orchestrator | Sunday 01 June 2025 04:51:54 +0000 (0:00:00.322) 0:00:14.307 *********** 2025-06-01 04:53:49.588381 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588390 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588400 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588409 | orchestrator | 2025-06-01 04:53:49.588419 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-01 04:53:49.588428 | orchestrator | Sunday 01 June 2025 04:51:55 +0000 (0:00:00.555) 0:00:14.862 *********** 2025-06-01 04:53:49.588438 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588447 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588457 | orchestrator | skipping: 
[testbed-node-5] 2025-06-01 04:53:49.588466 | orchestrator | 2025-06-01 04:53:49.588475 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-01 04:53:49.588485 | orchestrator | Sunday 01 June 2025 04:51:55 +0000 (0:00:00.332) 0:00:15.195 *********** 2025-06-01 04:53:49.588494 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588504 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588513 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588523 | orchestrator | 2025-06-01 04:53:49.588532 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-01 04:53:49.588542 | orchestrator | Sunday 01 June 2025 04:51:56 +0000 (0:00:00.311) 0:00:15.506 *********** 2025-06-01 04:53:49.588551 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588560 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588570 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588579 | orchestrator | 2025-06-01 04:53:49.588589 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-01 04:53:49.588636 | orchestrator | Sunday 01 June 2025 04:51:56 +0000 (0:00:00.302) 0:00:15.809 *********** 2025-06-01 04:53:49.588648 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.588690 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.588709 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.588727 | orchestrator | 2025-06-01 04:53:49.588744 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-01 04:53:49.588760 | orchestrator | Sunday 01 June 2025 04:51:56 +0000 (0:00:00.476) 0:00:16.285 *********** 2025-06-01 04:53:49.588771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--24633ad7--3e48--5d36--bc1c--15adae99ed01-osd--block--24633ad7--3e48--5d36--bc1c--15adae99ed01', 'dm-uuid-LVM-1eUOzdbAnujbrmmQbf1u8TWwCKKehc4EsW3O8lHP2AY4FoheEDAi3yxRewteMMBh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2a6257e3--2619--5e00--b9d8--6074ce245854-osd--block--2a6257e3--2619--5e00--b9d8--6074ce245854', 'dm-uuid-LVM-jvbLPog2454BR2VqTPTDTQuqD0m7XmJHNq8L9Bml09d5fS7mp2MKgWxLY5pba4oZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part1', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part14', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part15', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part16', 'scsi-SQEMU_QEMU_HARDDISK_9720f4a6-d2e6-4f67-b6f6-fba741bae89b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.588937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--baa7c707--8012--580f--8c9e--09def35a523c-osd--block--baa7c707--8012--580f--8c9e--09def35a523c', 'dm-uuid-LVM-PRLwnxcVzIsP7Q3HfzFKwdTPz1uGc6nycVh0jSEwLU2kbU5DsKCWhKIa7fzmgY4T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.588980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--24633ad7--3e48--5d36--bc1c--15adae99ed01-osd--block--24633ad7--3e48--5d36--bc1c--15adae99ed01'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SfdbD4-DQeU-upZX-fFei-KrR8-spZ2-2tSadc', 'scsi-0QEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85', 'scsi-SQEMU_QEMU_HARDDISK_52cdef25-f5ea-459b-a3d2-6dc79872de85'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2a6257e3--2619--5e00--b9d8--6074ce245854-osd--block--2a6257e3--2619--5e00--b9d8--6074ce245854'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z5Okb9-7wiI-AUzs-6xEc-WeRK-3xcZ-hI4vGp', 'scsi-0QEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087', 'scsi-SQEMU_QEMU_HARDDISK_5b466634-774d-43fb-b203-3068f5674087'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1f9d798--cc3d--57c0--9350--8228d94606be-osd--block--c1f9d798--cc3d--57c0--9350--8228d94606be', 'dm-uuid-LVM-AqU225ITWkMhxioP4SNN3vtZuUgxHr2CFmlfDeotkO8E502IVpeU2uNXBPoSaqMR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-06-01 04:53:49.589020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9', 'scsi-SQEMU_QEMU_HARDDISK_eda6ceaf-d5f2-4ee0-987b-2c5c3a488ff9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589131 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.589141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part1', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part14', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part15', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part16', 'scsi-SQEMU_QEMU_HARDDISK_60eef7c2-e85a-474a-b822-4cdf08490182-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589201 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f-osd--block--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f', 'dm-uuid-LVM-ScHrvNPr8qDyCeO4x5OiVfWTfDnUmC7SHZYBYTkTtP6D42HpChnXEPORdGms420C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--baa7c707--8012--580f--8c9e--09def35a523c-osd--block--baa7c707--8012--580f--8c9e--09def35a523c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-r4SpB9-BCLC-eYHP-lMrq-wCSy-3vhG-ZRqCC7', 'scsi-0QEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c', 'scsi-SQEMU_QEMU_HARDDISK_13757f92-d131-4fb2-97b0-30fa6d4a703c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--308e0632--b76f--5a8e--af6f--04e4a02ef5a9-osd--block--308e0632--b76f--5a8e--af6f--04e4a02ef5a9', 'dm-uuid-LVM-h6G5GzXBE45l6hxKniWXpOW1h9rmmErUiA7TRJwQlqicY2yDsAM0il518CF0D2fU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1f9d798--cc3d--57c0--9350--8228d94606be-osd--block--c1f9d798--cc3d--57c0--9350--8228d94606be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DLl1Fq-KyrV-vfYI-RyK1-3lga-eE7q-zypSS7', 'scsi-0QEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79', 'scsi-SQEMU_QEMU_HARDDISK_f8222133-3d15-437e-b81b-973910c5fe79'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110', 'scsi-SQEMU_QEMU_HARDDISK_1fa93f47-9163-4651-815b-24671ddef110'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 04:53:49.589307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 04:53:49.589318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-06-01 04:53:49.589328 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:53:49.589338 | orchestrator | skipping: [testbed-node-5] => (item=loop3: virtual loop device, 0.00 Bytes)
2025-06-01 04:53:49.589348 | orchestrator | skipping: [testbed-node-5] => (item=loop4: virtual loop device, 0.00 Bytes)
2025-06-01 04:53:49.589358 | orchestrator | skipping: [testbed-node-5] => (item=loop5: virtual loop device, 0.00 Bytes)
2025-06-01 04:53:49.589367 | orchestrator | skipping: [testbed-node-5] => (item=loop6: virtual loop device, 0.00 Bytes)
2025-06-01 04:53:49.589384 | orchestrator | skipping: [testbed-node-5] => (item=loop7: virtual loop device, 0.00 Bytes)
2025-06-01 04:53:49.589406 | orchestrator | skipping: [testbed-node-5] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 [cloudimg-rootfs, 79.00 GB], sda14 [4.00 MB], sda15 [UEFI, 106.00 MB], sda16 [BOOT, 913.00 MB])
2025-06-01 04:53:49.589417 | orchestrator | skipping: [testbed-node-5] => (item=sdb: QEMU HARDDISK, 20.00 GB; holder Ceph OSD LV a7ddc8d9 [dm-0])
2025-06-01 04:53:49.589428 | orchestrator | skipping: [testbed-node-5] => (item=sdc: QEMU HARDDISK, 20.00 GB; holder Ceph OSD LV 308e0632 [dm-1])
2025-06-01 04:53:49.589478 | orchestrator | skipping: [testbed-node-5] => (item=sdd: QEMU HARDDISK, 20.00 GB; no partitions or holders)
2025-06-01 04:53:49.589496 | orchestrator | skipping: [testbed-node-5] => (item=sr0: QEMU DVD-ROM, 506.00 KB; label config-2)
2025-06-01 04:53:49.589506 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:53:49.589516 | orchestrator |
2025-06-01 04:53:49.589526 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-01 04:53:49.589536 | orchestrator | Sunday 01 June 2025 04:51:57 +0000 (0:00:00.518)       0:00:16.803 ***********
2025-06-01 04:53:49.589642 | orchestrator | skipping: [testbed-node-3] => (item=dm-0: Ceph OSD LV 24633ad7, 20.00 GB; condition false: osd_auto_discovery | default(False) | bool)
2025-06-01 04:53:49.589691 | orchestrator | skipping: [testbed-node-3] => (item=dm-1: Ceph OSD LV 2a6257e3, 20.00 GB; osd_auto_discovery false)
2025-06-01 04:53:49.589706 | orchestrator | skipping: [testbed-node-3] => (item=loop0: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589723 | orchestrator | skipping: [testbed-node-3] => (item=loop1: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589734 | orchestrator | skipping: [testbed-node-3] => (item=loop2: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589758 | orchestrator | skipping: [testbed-node-3] => (item=loop3: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589769 | orchestrator | skipping: [testbed-node-3] => (item=loop4: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589779 | orchestrator | skipping: [testbed-node-3] => (item=loop5: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589789 | orchestrator | skipping: [testbed-node-4] => (item=dm-0: Ceph OSD LV baa7c707, 20.00 GB; osd_auto_discovery false)
2025-06-01 04:53:49.589805 | orchestrator | skipping: [testbed-node-3] => (item=loop6: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589815 | orchestrator | skipping: [testbed-node-4] => (item=dm-1: Ceph OSD LV c1f9d798, 20.00 GB; osd_auto_discovery false)
2025-06-01 04:53:49.589837 | orchestrator | skipping: [testbed-node-3] => (item=loop7: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589848 | orchestrator | skipping: [testbed-node-4] => (item=loop0: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589859 | orchestrator | skipping: [testbed-node-3] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 [cloudimg-rootfs, 79.00 GB], sda14 [4.00 MB], sda15 [UEFI, 106.00 MB], sda16 [BOOT, 913.00 MB]; osd_auto_discovery false)
2025-06-01 04:53:49.589876 | orchestrator | skipping: [testbed-node-4] => (item=loop1: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589897 | orchestrator | skipping: [testbed-node-3] => (item=sdb: QEMU HARDDISK, 20.00 GB; holder Ceph OSD LV 24633ad7 [dm-0]; osd_auto_discovery false)
2025-06-01 04:53:49.589909 | orchestrator | skipping: [testbed-node-4] => (item=loop2: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589920 | orchestrator | skipping: [testbed-node-3] => (item=sdc: QEMU HARDDISK, 20.00 GB; holder Ceph OSD LV 2a6257e3 [dm-1]; osd_auto_discovery false)
2025-06-01 04:53:49.589935 | orchestrator | skipping: [testbed-node-4] => (item=loop3: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589945 | orchestrator | skipping: [testbed-node-3] => (item=sdd: QEMU HARDDISK, 20.00 GB; no partitions or holders; osd_auto_discovery false)
2025-06-01 04:53:49.589965 | orchestrator | skipping: [testbed-node-4] => (item=loop4: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589975 | orchestrator | skipping: [testbed-node-3] => (item=sr0: QEMU DVD-ROM, 506.00 KB; label config-2; osd_auto_discovery false)
2025-06-01 04:53:49.589985 | orchestrator | skipping: [testbed-node-4] => (item=loop5: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.589995 | orchestrator | skipping: [testbed-node-4] => (item=loop6: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590011 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:53:49.590056 | orchestrator | skipping: [testbed-node-4] => (item=loop7: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590082 | orchestrator | skipping: [testbed-node-4] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 [cloudimg-rootfs, 79.00 GB], sda14 [4.00 MB], sda15 [UEFI, 106.00 MB], sda16 [BOOT, 913.00 MB]; osd_auto_discovery false)
2025-06-01 04:53:49.590094 | orchestrator | skipping: [testbed-node-4] => (item=sdb: QEMU HARDDISK, 20.00 GB; holder Ceph OSD LV baa7c707 [dm-0]; osd_auto_discovery false)
2025-06-01 04:53:49.590111 | orchestrator | skipping: [testbed-node-4] => (item=sdc: QEMU HARDDISK, 20.00 GB; holder Ceph OSD LV c1f9d798 [dm-1]; osd_auto_discovery false)
2025-06-01 04:53:49.590122 | orchestrator | skipping: [testbed-node-5] => (item=dm-0: Ceph OSD LV a7ddc8d9, 20.00 GB; osd_auto_discovery false)
2025-06-01 04:53:49.590143 | orchestrator | skipping: [testbed-node-4] => (item=sdd: QEMU HARDDISK, 20.00 GB; no partitions or holders; osd_auto_discovery false)
2025-06-01 04:53:49.590154 | orchestrator | skipping: [testbed-node-4] => (item=sr0: QEMU DVD-ROM, 506.00 KB; label config-2; osd_auto_discovery false)
2025-06-01 04:53:49.590164 | orchestrator | skipping: [testbed-node-5] => (item=dm-1: Ceph OSD LV 308e0632, 20.00 GB; osd_auto_discovery false)
2025-06-01 04:53:49.590179 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:53:49.590190 | orchestrator | skipping: [testbed-node-5] => (item=loop0: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590200 | orchestrator | skipping: [testbed-node-5] => (item=loop1: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590210 | orchestrator | skipping: [testbed-node-5] => (item=loop2: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590233 | orchestrator | skipping: [testbed-node-5] => (item=loop3: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590244 | orchestrator | skipping: [testbed-node-5] => (item=loop4: virtual loop device, 0.00 Bytes; osd_auto_discovery false)
2025-06-01 04:53:49.590254 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard':
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part1', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part14', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part15', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part16', 'scsi-SQEMU_QEMU_HARDDISK_87bf084a-b980-43ed-ba9b-a8dc90a62403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-01 04:53:49.590313 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f-osd--block--a7ddc8d9--d495--524c--b0f4--e7d8a8d73f0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-73KRxk-M406-MiXW-jgpk-jXkk-l5hx-WvE3Ux', 'scsi-0QEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af', 'scsi-SQEMU_QEMU_HARDDISK_48a1c260-3052-4e59-9db5-94630d6736af'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590329 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--308e0632--b76f--5a8e--af6f--04e4a02ef5a9-osd--block--308e0632--b76f--5a8e--af6f--04e4a02ef5a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4EBCt5-xfUc-O52C-4B6h-6o6d-D1FV-ne9RND', 'scsi-0QEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c', 'scsi-SQEMU_QEMU_HARDDISK_2bf032b4-821f-4153-a16b-c7c7b9690c3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2', 'scsi-SQEMU_QEMU_HARDDISK_88d52e43-2c9d-46e0-bf5e-2238e33d97a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-03-02-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 04:53:49.590370 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.590380 | orchestrator | 2025-06-01 04:53:49.590390 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-01 04:53:49.590400 | orchestrator | Sunday 01 June 2025 04:51:57 +0000 (0:00:00.576) 0:00:17.380 *********** 2025-06-01 04:53:49.590410 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.590420 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.590429 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.590439 | orchestrator | 2025-06-01 04:53:49.590448 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-01 04:53:49.590458 | orchestrator | Sunday 01 June 2025 04:51:58 +0000 (0:00:00.691) 0:00:18.071 *********** 2025-06-01 04:53:49.590468 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.590477 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.590487 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.590503 | orchestrator | 2025-06-01 04:53:49.590513 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-01 04:53:49.590523 | orchestrator | Sunday 01 June 2025 04:51:59 +0000 (0:00:00.457) 0:00:18.528 *********** 2025-06-01 04:53:49.590532 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.590542 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.590551 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.590561 | orchestrator | 2025-06-01 04:53:49.590572 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-01 04:53:49.590583 | orchestrator | Sunday 01 June 2025 04:51:59 +0000 (0:00:00.697) 0:00:19.226 
*********** 2025-06-01 04:53:49.590593 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.590602 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.590612 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.590621 | orchestrator | 2025-06-01 04:53:49.590631 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-01 04:53:49.590641 | orchestrator | Sunday 01 June 2025 04:52:00 +0000 (0:00:00.292) 0:00:19.518 *********** 2025-06-01 04:53:49.590650 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.590716 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.590728 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.590738 | orchestrator | 2025-06-01 04:53:49.590748 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-01 04:53:49.590757 | orchestrator | Sunday 01 June 2025 04:52:00 +0000 (0:00:00.465) 0:00:19.983 *********** 2025-06-01 04:53:49.590767 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.590777 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.590786 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.590796 | orchestrator | 2025-06-01 04:53:49.590806 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-01 04:53:49.590815 | orchestrator | Sunday 01 June 2025 04:52:00 +0000 (0:00:00.481) 0:00:20.465 *********** 2025-06-01 04:53:49.590825 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-01 04:53:49.590835 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-01 04:53:49.590845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-01 04:53:49.590854 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-01 04:53:49.590862 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-01 04:53:49.590870 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-01 04:53:49.590878 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-01 04:53:49.590886 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-01 04:53:49.590894 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-01 04:53:49.590902 | orchestrator | 2025-06-01 04:53:49.590910 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-01 04:53:49.590918 | orchestrator | Sunday 01 June 2025 04:52:01 +0000 (0:00:00.794) 0:00:21.259 *********** 2025-06-01 04:53:49.590926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 04:53:49.590934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 04:53:49.590942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 04:53:49.590950 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.590958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-01 04:53:49.590966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-01 04:53:49.590974 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-01 04:53:49.590982 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.590990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-01 04:53:49.590998 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-01 04:53:49.591007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-01 04:53:49.591015 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.591029 | orchestrator | 2025-06-01 04:53:49.591037 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-01 04:53:49.591045 | orchestrator | Sunday 01 June 2025 04:52:02 +0000 (0:00:00.319) 0:00:21.578 *********** 2025-06-01 
04:53:49.591053 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:53:49.591062 | orchestrator | 2025-06-01 04:53:49.591070 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-01 04:53:49.591078 | orchestrator | Sunday 01 June 2025 04:52:02 +0000 (0:00:00.695) 0:00:22.274 *********** 2025-06-01 04:53:49.591086 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591094 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.591102 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.591110 | orchestrator | 2025-06-01 04:53:49.591127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-01 04:53:49.591136 | orchestrator | Sunday 01 June 2025 04:52:03 +0000 (0:00:00.312) 0:00:22.586 *********** 2025-06-01 04:53:49.591143 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591151 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.591159 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.591167 | orchestrator | 2025-06-01 04:53:49.591175 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-01 04:53:49.591183 | orchestrator | Sunday 01 June 2025 04:52:03 +0000 (0:00:00.294) 0:00:22.880 *********** 2025-06-01 04:53:49.591190 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591198 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.591206 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:53:49.591290 | orchestrator | 2025-06-01 04:53:49.591299 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-01 04:53:49.591307 | orchestrator | Sunday 01 June 2025 04:52:03 +0000 (0:00:00.319) 0:00:23.200 *********** 2025-06-01 
04:53:49.591315 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.591322 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.591330 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.591338 | orchestrator | 2025-06-01 04:53:49.591346 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-01 04:53:49.591353 | orchestrator | Sunday 01 June 2025 04:52:04 +0000 (0:00:00.576) 0:00:23.776 *********** 2025-06-01 04:53:49.591361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:53:49.591369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:53:49.591377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:53:49.591385 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591393 | orchestrator | 2025-06-01 04:53:49.591401 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-01 04:53:49.591408 | orchestrator | Sunday 01 June 2025 04:52:04 +0000 (0:00:00.349) 0:00:24.126 *********** 2025-06-01 04:53:49.591417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:53:49.591425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:53:49.591432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:53:49.591440 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591448 | orchestrator | 2025-06-01 04:53:49.591456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-01 04:53:49.591463 | orchestrator | Sunday 01 June 2025 04:52:04 +0000 (0:00:00.351) 0:00:24.477 *********** 2025-06-01 04:53:49.591471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 04:53:49.591479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 04:53:49.591487 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 04:53:49.591495 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591503 | orchestrator | 2025-06-01 04:53:49.591519 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-01 04:53:49.591527 | orchestrator | Sunday 01 June 2025 04:52:05 +0000 (0:00:00.337) 0:00:24.815 *********** 2025-06-01 04:53:49.591534 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:53:49.591542 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:53:49.591550 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:53:49.591596 | orchestrator | 2025-06-01 04:53:49.591605 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-01 04:53:49.591612 | orchestrator | Sunday 01 June 2025 04:52:05 +0000 (0:00:00.350) 0:00:25.166 *********** 2025-06-01 04:53:49.591620 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-01 04:53:49.591628 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-01 04:53:49.591636 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-01 04:53:49.591644 | orchestrator | 2025-06-01 04:53:49.591651 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-01 04:53:49.591677 | orchestrator | Sunday 01 June 2025 04:52:06 +0000 (0:00:00.505) 0:00:25.671 *********** 2025-06-01 04:53:49.591686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 04:53:49.591695 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:53:49.591702 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:53:49.591710 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-01 04:53:49.591718 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-06-01 04:53:49.591726 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-01 04:53:49.591774 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-01 04:53:49.591783 | orchestrator | 2025-06-01 04:53:49.591790 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-01 04:53:49.591798 | orchestrator | Sunday 01 June 2025 04:52:07 +0000 (0:00:00.930) 0:00:26.602 *********** 2025-06-01 04:53:49.591806 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 04:53:49.591814 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 04:53:49.591821 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 04:53:49.591829 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-01 04:53:49.591837 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-01 04:53:49.591845 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-01 04:53:49.591879 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-01 04:53:49.591889 | orchestrator | 2025-06-01 04:53:49.591908 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-01 04:53:49.591917 | orchestrator | Sunday 01 June 2025 04:52:09 +0000 (0:00:01.890) 0:00:28.492 *********** 2025-06-01 04:53:49.591925 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:53:49.591933 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:53:49.591940 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-01 04:53:49.591948 | orchestrator | 2025-06-01 04:53:49.591956 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-01 04:53:49.591964 | orchestrator | Sunday 01 June 2025 04:52:09 +0000 (0:00:00.401) 0:00:28.894 *********** 2025-06-01 04:53:49.591972 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-01 04:53:49.591981 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-01 04:53:49.591996 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-01 04:53:49.592004 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-01 04:53:49.592012 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-01 04:53:49.592020 | orchestrator | 2025-06-01 04:53:49.592028 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-06-01 04:53:49.592036 | orchestrator | Sunday 01 June 2025 04:52:55 +0000 (0:00:45.713) 0:01:14.607 *********** 2025-06-01 04:53:49.592043 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592051 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592059 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592067 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592082 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592090 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-01 04:53:49.592098 | orchestrator | 2025-06-01 04:53:49.592106 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-01 04:53:49.592113 | orchestrator | Sunday 01 June 2025 04:53:17 +0000 (0:00:22.808) 0:01:37.416 *********** 2025-06-01 04:53:49.592121 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592129 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592136 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592144 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592152 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592159 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592167 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-01 04:53:49.592175 | orchestrator | 2025-06-01 04:53:49.592182 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-01 04:53:49.592190 | orchestrator | Sunday 01 June 2025 04:53:29 +0000 (0:00:11.610) 0:01:49.027 *********** 2025-06-01 04:53:49.592198 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592206 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 04:53:49.592214 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 04:53:49.592222 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592229 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 04:53:49.592242 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 04:53:49.592258 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592267 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 04:53:49.592274 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 04:53:49.592282 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592290 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 04:53:49.592298 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 04:53:49.592306 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 04:53:49.592336 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-06-01 04:53:49.592345 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 04:53:49.592353 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 04:53:49.592361 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 04:53:49.592369 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 04:53:49.592392 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-01 04:53:49.592400 | orchestrator |
2025-06-01 04:53:49.592408 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:53:49.592416 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-01 04:53:49.592426 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-01 04:53:49.592434 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-01 04:53:49.592441 | orchestrator |
2025-06-01 04:53:49.592449 | orchestrator |
2025-06-01 04:53:49.592457 | orchestrator |
2025-06-01 04:53:49.592465 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:53:49.592473 | orchestrator | Sunday 01 June 2025 04:53:47 +0000 (0:00:17.710) 0:02:06.737 ***********
2025-06-01 04:53:49.592481 | orchestrator | ===============================================================================
2025-06-01 04:53:49.592488 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.71s
2025-06-01 04:53:49.592496 | orchestrator | generate keys ---------------------------------------------------------- 22.81s
2025-06-01 04:53:49.592504 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.71s
2025-06-01 04:53:49.592512 | orchestrator | get keys from monitors ------------------------------------------------- 11.61s
2025-06-01 04:53:49.592519 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.19s
2025-06-01 04:53:49.592527 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.89s
2025-06-01 04:53:49.592534 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s
2025-06-01 04:53:49.592542 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.93s
2025-06-01 04:53:49.592550 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s
2025-06-01 04:53:49.592558 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2025-06-01 04:53:49.592565 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.79s
2025-06-01 04:53:49.592573 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s
2025-06-01 04:53:49.592581 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2025-06-01 04:53:49.592595 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s
2025-06-01 04:53:49.592603 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2025-06-01 04:53:49.592611 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s
2025-06-01 04:53:49.592627 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.59s
2025-06-01 04:53:49.592636 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2025-06-01 04:53:49.592643 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s
2025-06-01 04:53:49.592651 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.56s
2025-06-01 04:53:49.592679 | orchestrator | 2025-06-01 04:53:49 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED
2025-06-01 04:53:49.592688 | orchestrator | 2025-06-01 04:53:49 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:53:52.641714 | orchestrator | 2025-06-01 04:53:52 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED
2025-06-01 04:53:52.643791 | orchestrator | 2025-06-01 04:53:52 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED
2025-06-01 04:53:52.646392 | orchestrator | 2025-06-01 04:53:52 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED
2025-06-01 04:53:52.646460 | orchestrator | 2025-06-01 04:53:52 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:53:55.692976 | orchestrator | 2025-06-01 04:53:55 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED
2025-06-01 04:53:55.693942 | orchestrator | 2025-06-01 04:53:55 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED
2025-06-01 04:53:55.696015 | orchestrator | 2025-06-01 04:53:55 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED
2025-06-01 04:53:55.696094 | orchestrator | 2025-06-01 04:53:55 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:53:58.754972 | orchestrator | 2025-06-01 04:53:58 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED
2025-06-01 04:53:58.755983 | orchestrator | 2025-06-01 04:53:58 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED
2025-06-01 04:53:58.758960 | orchestrator | 2025-06-01 04:53:58 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED
2025-06-01 04:53:58.759065 | orchestrator | 2025-06-01 04:53:58 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:54:01.814333 | orchestrator | 2025-06-01 04:54:01 | INFO  | Task 
7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:01.816127 | orchestrator | 2025-06-01 04:54:01 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:01.817776 | orchestrator | 2025-06-01 04:54:01 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED 2025-06-01 04:54:01.817828 | orchestrator | 2025-06-01 04:54:01 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:04.864226 | orchestrator | 2025-06-01 04:54:04 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:04.866574 | orchestrator | 2025-06-01 04:54:04 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:04.869626 | orchestrator | 2025-06-01 04:54:04 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED 2025-06-01 04:54:04.869751 | orchestrator | 2025-06-01 04:54:04 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:07.927538 | orchestrator | 2025-06-01 04:54:07 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:07.928997 | orchestrator | 2025-06-01 04:54:07 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:07.930669 | orchestrator | 2025-06-01 04:54:07 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED 2025-06-01 04:54:07.930733 | orchestrator | 2025-06-01 04:54:07 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:10.986418 | orchestrator | 2025-06-01 04:54:10 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:10.986845 | orchestrator | 2025-06-01 04:54:10 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:10.988458 | orchestrator | 2025-06-01 04:54:10 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED 2025-06-01 04:54:10.988477 | orchestrator | 2025-06-01 04:54:10 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 04:54:14.032907 | orchestrator | 2025-06-01 04:54:14 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:14.033764 | orchestrator | 2025-06-01 04:54:14 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:14.033796 | orchestrator | 2025-06-01 04:54:14 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED 2025-06-01 04:54:14.033807 | orchestrator | 2025-06-01 04:54:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:17.088083 | orchestrator | 2025-06-01 04:54:17 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:17.089853 | orchestrator | 2025-06-01 04:54:17 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:17.091580 | orchestrator | 2025-06-01 04:54:17 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state STARTED 2025-06-01 04:54:17.091607 | orchestrator | 2025-06-01 04:54:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:20.145344 | orchestrator | 2025-06-01 04:54:20 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:20.149083 | orchestrator | 2025-06-01 04:54:20 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:20.152007 | orchestrator | 2025-06-01 04:54:20 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:20.154612 | orchestrator | 2025-06-01 04:54:20 | INFO  | Task 0d95588d-7b23-4637-a885-65256d44ab6c is in state SUCCESS 2025-06-01 04:54:20.154913 | orchestrator | 2025-06-01 04:54:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:23.208042 | orchestrator | 2025-06-01 04:54:23 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:23.209482 | orchestrator | 2025-06-01 04:54:23 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 
04:54:23.211717 | orchestrator | 2025-06-01 04:54:23 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:23.211821 | orchestrator | 2025-06-01 04:54:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:26.250758 | orchestrator | 2025-06-01 04:54:26 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:26.252443 | orchestrator | 2025-06-01 04:54:26 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:26.255400 | orchestrator | 2025-06-01 04:54:26 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:26.256020 | orchestrator | 2025-06-01 04:54:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:29.295143 | orchestrator | 2025-06-01 04:54:29 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:29.297292 | orchestrator | 2025-06-01 04:54:29 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:29.299909 | orchestrator | 2025-06-01 04:54:29 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:29.299971 | orchestrator | 2025-06-01 04:54:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:32.353465 | orchestrator | 2025-06-01 04:54:32 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:32.354439 | orchestrator | 2025-06-01 04:54:32 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:32.355902 | orchestrator | 2025-06-01 04:54:32 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:32.355928 | orchestrator | 2025-06-01 04:54:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:35.402125 | orchestrator | 2025-06-01 04:54:35 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:35.403160 | orchestrator | 2025-06-01 04:54:35 | 
INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:35.405142 | orchestrator | 2025-06-01 04:54:35 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:35.405317 | orchestrator | 2025-06-01 04:54:35 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:38.469752 | orchestrator | 2025-06-01 04:54:38 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state STARTED 2025-06-01 04:54:38.471968 | orchestrator | 2025-06-01 04:54:38 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:38.475904 | orchestrator | 2025-06-01 04:54:38 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:38.475956 | orchestrator | 2025-06-01 04:54:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:54:41.516918 | orchestrator | 2025-06-01 04:54:41 | INFO  | Task 7d682574-d82c-4c83-90a9-06784d7cd534 is in state SUCCESS 2025-06-01 04:54:41.518509 | orchestrator | 2025-06-01 04:54:41.518567 | orchestrator | 2025-06-01 04:54:41.518587 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-01 04:54:41.518605 | orchestrator | 2025-06-01 04:54:41.518622 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-01 04:54:41.518639 | orchestrator | Sunday 01 June 2025 04:53:52 +0000 (0:00:00.178) 0:00:00.178 *********** 2025-06-01 04:54:41.518657 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-01 04:54:41.518671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.518681 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.518691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder-backup.keyring) 2025-06-01 04:54:41.518701 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.518929 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-01 04:54:41.518952 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-01 04:54:41.518970 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-01 04:54:41.518998 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-01 04:54:41.519008 | orchestrator | 2025-06-01 04:54:41.519018 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-01 04:54:41.519029 | orchestrator | Sunday 01 June 2025 04:53:56 +0000 (0:00:04.124) 0:00:04.302 *********** 2025-06-01 04:54:41.519358 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-01 04:54:41.519375 | orchestrator | 2025-06-01 04:54:41.519387 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-01 04:54:41.519399 | orchestrator | Sunday 01 June 2025 04:53:57 +0000 (0:00:00.957) 0:00:05.259 *********** 2025-06-01 04:54:41.519410 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-01 04:54:41.519421 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.519433 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.519443 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-01 04:54:41.519452 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-01 
04:54:41.519462 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-01 04:54:41.519472 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-01 04:54:41.519481 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-01 04:54:41.519490 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-01 04:54:41.519500 | orchestrator | 2025-06-01 04:54:41.519510 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-01 04:54:41.519519 | orchestrator | Sunday 01 June 2025 04:54:10 +0000 (0:00:13.137) 0:00:18.396 *********** 2025-06-01 04:54:41.519530 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-01 04:54:41.519539 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.519549 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.519558 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-01 04:54:41.519568 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-01 04:54:41.519577 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-01 04:54:41.519587 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-01 04:54:41.519597 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-01 04:54:41.519606 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-01 04:54:41.519616 | orchestrator | 2025-06-01 04:54:41.519625 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:54:41.519635 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:54:41.519646 | orchestrator | 2025-06-01 04:54:41.519655 | orchestrator | 2025-06-01 04:54:41.519665 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:54:41.519675 | orchestrator | Sunday 01 June 2025 04:54:16 +0000 (0:00:06.659) 0:00:25.056 *********** 2025-06-01 04:54:41.519685 | orchestrator | =============================================================================== 2025-06-01 04:54:41.519695 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.14s 2025-06-01 04:54:41.519732 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.66s 2025-06-01 04:54:41.519750 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.12s 2025-06-01 04:54:41.519765 | orchestrator | Create share directory -------------------------------------------------- 0.96s 2025-06-01 04:54:41.519781 | orchestrator | 2025-06-01 04:54:41.519798 | orchestrator | 2025-06-01 04:54:41.519816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:54:41.519833 | orchestrator | 2025-06-01 04:54:41.519865 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:54:41.519890 | orchestrator | Sunday 01 June 2025 04:52:59 +0000 (0:00:00.221) 0:00:00.221 *********** 2025-06-01 04:54:41.519899 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.519909 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.519919 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.519928 | orchestrator | 2025-06-01 04:54:41.519938 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:54:41.519948 | orchestrator | Sunday 01 June 2025 04:52:59 +0000 (0:00:00.239) 0:00:00.461 *********** 2025-06-01 04:54:41.519957 | 
orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-01 04:54:41.519967 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-01 04:54:41.519977 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-01 04:54:41.519986 | orchestrator | 2025-06-01 04:54:41.519996 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-01 04:54:41.520005 | orchestrator | 2025-06-01 04:54:41.520015 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 04:54:41.520024 | orchestrator | Sunday 01 June 2025 04:52:59 +0000 (0:00:00.347) 0:00:00.808 *********** 2025-06-01 04:54:41.520034 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:54:41.520043 | orchestrator | 2025-06-01 04:54:41.520053 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-01 04:54:41.520062 | orchestrator | Sunday 01 June 2025 04:53:00 +0000 (0:00:00.442) 0:00:01.250 *********** 2025-06-01 04:54:41.520085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.520121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.520141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.520152 | orchestrator | 2025-06-01 04:54:41.520168 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-01 04:54:41.520178 | orchestrator | Sunday 01 June 2025 04:53:01 +0000 (0:00:00.983) 
0:00:02.234 *********** 2025-06-01 04:54:41.520188 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.520198 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.520207 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.520217 | orchestrator | 2025-06-01 04:54:41.520226 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 04:54:41.520248 | orchestrator | Sunday 01 June 2025 04:53:01 +0000 (0:00:00.367) 0:00:02.602 *********** 2025-06-01 04:54:41.520258 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-01 04:54:41.520268 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-01 04:54:41.520283 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-01 04:54:41.520293 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-01 04:54:41.520303 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-01 04:54:41.520312 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-01 04:54:41.520322 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-01 04:54:41.520332 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-01 04:54:41.520341 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-01 04:54:41.520351 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-01 04:54:41.520500 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-01 04:54:41.520511 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-01 04:54:41.520521 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-01 04:54:41.520531 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-01 04:54:41.520540 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-01 04:54:41.520556 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-01 04:54:41.520566 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-01 04:54:41.520575 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-01 04:54:41.520585 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-01 04:54:41.520594 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-01 04:54:41.520604 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-01 04:54:41.520614 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-01 04:54:41.520623 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-01 04:54:41.520633 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-01 04:54:41.520644 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-01 04:54:41.520655 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-01 04:54:41.520665 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-01 
04:54:41.520675 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-01 04:54:41.520692 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-01 04:54:41.520702 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-01 04:54:41.520738 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-01 04:54:41.520748 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-01 04:54:41.520757 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-01 04:54:41.520767 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-01 04:54:41.520777 | orchestrator | 2025-06-01 04:54:41.520787 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.520796 | orchestrator | Sunday 01 June 2025 04:53:02 +0000 (0:00:00.672) 0:00:03.275 *********** 2025-06-01 04:54:41.520806 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.520816 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.520825 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.520835 | orchestrator | 2025-06-01 04:54:41.520845 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2025-06-01 04:54:41.520854 | orchestrator | Sunday 01 June 2025 04:53:02 +0000 (0:00:00.263) 0:00:03.538 *********** 2025-06-01 04:54:41.520864 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.520874 | orchestrator | 2025-06-01 04:54:41.520884 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.520901 | orchestrator | Sunday 01 June 2025 04:53:02 +0000 (0:00:00.138) 0:00:03.676 *********** 2025-06-01 04:54:41.520911 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.520921 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.520930 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.520940 | orchestrator | 2025-06-01 04:54:41.520950 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.520959 | orchestrator | Sunday 01 June 2025 04:53:02 +0000 (0:00:00.399) 0:00:04.076 *********** 2025-06-01 04:54:41.520969 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.520979 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.520988 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.520998 | orchestrator | 2025-06-01 04:54:41.521007 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.521017 | orchestrator | Sunday 01 June 2025 04:53:03 +0000 (0:00:00.272) 0:00:04.349 *********** 2025-06-01 04:54:41.521026 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521036 | orchestrator | 2025-06-01 04:54:41.521046 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.521055 | orchestrator | Sunday 01 June 2025 04:53:03 +0000 (0:00:00.136) 0:00:04.485 *********** 2025-06-01 04:54:41.521065 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521075 | orchestrator | skipping: [testbed-node-1] 2025-06-01 
04:54:41.521084 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.521094 | orchestrator | 2025-06-01 04:54:41.521103 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.521113 | orchestrator | Sunday 01 June 2025 04:53:03 +0000 (0:00:00.271) 0:00:04.756 *********** 2025-06-01 04:54:41.521122 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.521135 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.521150 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.521168 | orchestrator | 2025-06-01 04:54:41.521179 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.521190 | orchestrator | Sunday 01 June 2025 04:53:03 +0000 (0:00:00.308) 0:00:05.065 *********** 2025-06-01 04:54:41.521201 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521213 | orchestrator | 2025-06-01 04:54:41.521223 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.521235 | orchestrator | Sunday 01 June 2025 04:53:04 +0000 (0:00:00.372) 0:00:05.438 *********** 2025-06-01 04:54:41.521246 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521257 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.521268 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.521278 | orchestrator | 2025-06-01 04:54:41.521289 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.521300 | orchestrator | Sunday 01 June 2025 04:53:04 +0000 (0:00:00.306) 0:00:05.745 *********** 2025-06-01 04:54:41.521311 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.521322 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.521333 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.521344 | orchestrator | 2025-06-01 04:54:41.521355 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2025-06-01 04:54:41.521366 | orchestrator | Sunday 01 June 2025 04:53:04 +0000 (0:00:00.338) 0:00:06.083 *********** 2025-06-01 04:54:41.521377 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521388 | orchestrator | 2025-06-01 04:54:41.521398 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.521410 | orchestrator | Sunday 01 June 2025 04:53:05 +0000 (0:00:00.129) 0:00:06.212 *********** 2025-06-01 04:54:41.521421 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521433 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.521444 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.521455 | orchestrator | 2025-06-01 04:54:41.521466 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.521477 | orchestrator | Sunday 01 June 2025 04:53:05 +0000 (0:00:00.284) 0:00:06.496 *********** 2025-06-01 04:54:41.521489 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.521498 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.521508 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.521517 | orchestrator | 2025-06-01 04:54:41.521527 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.521536 | orchestrator | Sunday 01 June 2025 04:53:05 +0000 (0:00:00.547) 0:00:07.044 *********** 2025-06-01 04:54:41.521546 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521555 | orchestrator | 2025-06-01 04:54:41.521565 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.521575 | orchestrator | Sunday 01 June 2025 04:53:06 +0000 (0:00:00.132) 0:00:07.177 *********** 2025-06-01 04:54:41.521584 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521594 | orchestrator | skipping: [testbed-node-1] 
2025-06-01 04:54:41.521603 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.521613 | orchestrator | 2025-06-01 04:54:41.521623 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.521632 | orchestrator | Sunday 01 June 2025 04:53:06 +0000 (0:00:00.312) 0:00:07.490 *********** 2025-06-01 04:54:41.521642 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.521651 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.521661 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.521670 | orchestrator | 2025-06-01 04:54:41.521680 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.521689 | orchestrator | Sunday 01 June 2025 04:53:06 +0000 (0:00:00.312) 0:00:07.802 *********** 2025-06-01 04:54:41.521699 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521730 | orchestrator | 2025-06-01 04:54:41.521741 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.521750 | orchestrator | Sunday 01 June 2025 04:53:06 +0000 (0:00:00.125) 0:00:07.927 *********** 2025-06-01 04:54:41.521767 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521776 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.521786 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.521796 | orchestrator | 2025-06-01 04:54:41.521805 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.521815 | orchestrator | Sunday 01 June 2025 04:53:07 +0000 (0:00:00.481) 0:00:08.409 *********** 2025-06-01 04:54:41.521824 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.521834 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.521843 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.521853 | orchestrator | 2025-06-01 04:54:41.521868 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2025-06-01 04:54:41.521878 | orchestrator | Sunday 01 June 2025 04:53:07 +0000 (0:00:00.321) 0:00:08.731 *********** 2025-06-01 04:54:41.521888 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521897 | orchestrator | 2025-06-01 04:54:41.521907 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.521916 | orchestrator | Sunday 01 June 2025 04:53:07 +0000 (0:00:00.127) 0:00:08.858 *********** 2025-06-01 04:54:41.521926 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.521937 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.521953 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.521969 | orchestrator | 2025-06-01 04:54:41.521987 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.522004 | orchestrator | Sunday 01 June 2025 04:53:08 +0000 (0:00:00.291) 0:00:09.150 *********** 2025-06-01 04:54:41.522084 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.522095 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.522104 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.522114 | orchestrator | 2025-06-01 04:54:41.522124 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.522134 | orchestrator | Sunday 01 June 2025 04:53:08 +0000 (0:00:00.276) 0:00:09.427 *********** 2025-06-01 04:54:41.522143 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522153 | orchestrator | 2025-06-01 04:54:41.522162 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.522172 | orchestrator | Sunday 01 June 2025 04:53:08 +0000 (0:00:00.140) 0:00:09.567 *********** 2025-06-01 04:54:41.522181 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522197 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 04:54:41.522207 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.522216 | orchestrator | 2025-06-01 04:54:41.522226 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.522235 | orchestrator | Sunday 01 June 2025 04:53:08 +0000 (0:00:00.525) 0:00:10.093 *********** 2025-06-01 04:54:41.522245 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.522254 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.522264 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.522273 | orchestrator | 2025-06-01 04:54:41.522283 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.522292 | orchestrator | Sunday 01 June 2025 04:53:09 +0000 (0:00:00.341) 0:00:10.435 *********** 2025-06-01 04:54:41.522302 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522311 | orchestrator | 2025-06-01 04:54:41.522321 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.522330 | orchestrator | Sunday 01 June 2025 04:53:09 +0000 (0:00:00.120) 0:00:10.555 *********** 2025-06-01 04:54:41.522340 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522349 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.522359 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.522369 | orchestrator | 2025-06-01 04:54:41.522378 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 04:54:41.522388 | orchestrator | Sunday 01 June 2025 04:53:09 +0000 (0:00:00.286) 0:00:10.842 *********** 2025-06-01 04:54:41.522404 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:54:41.522414 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:54:41.522424 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:54:41.522433 | orchestrator | 2025-06-01 04:54:41.522443 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2025-06-01 04:54:41.522474 | orchestrator | Sunday 01 June 2025 04:53:10 +0000 (0:00:00.529) 0:00:11.372 *********** 2025-06-01 04:54:41.522484 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522494 | orchestrator | 2025-06-01 04:54:41.522503 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 04:54:41.522513 | orchestrator | Sunday 01 June 2025 04:53:10 +0000 (0:00:00.129) 0:00:11.501 *********** 2025-06-01 04:54:41.522523 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522532 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.522542 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.522552 | orchestrator | 2025-06-01 04:54:41.522561 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-01 04:54:41.522571 | orchestrator | Sunday 01 June 2025 04:53:10 +0000 (0:00:00.311) 0:00:11.813 *********** 2025-06-01 04:54:41.522580 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:54:41.522590 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:54:41.522599 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:54:41.522609 | orchestrator | 2025-06-01 04:54:41.522618 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-01 04:54:41.522628 | orchestrator | Sunday 01 June 2025 04:53:12 +0000 (0:00:01.598) 0:00:13.412 *********** 2025-06-01 04:54:41.522638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 04:54:41.522647 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 04:54:41.522657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 04:54:41.522666 | orchestrator | 2025-06-01 
04:54:41.522676 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-01 04:54:41.522685 | orchestrator | Sunday 01 June 2025 04:53:14 +0000 (0:00:01.873) 0:00:15.285 *********** 2025-06-01 04:54:41.522695 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-01 04:54:41.522742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-01 04:54:41.522762 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-01 04:54:41.522777 | orchestrator | 2025-06-01 04:54:41.522792 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-01 04:54:41.522807 | orchestrator | Sunday 01 June 2025 04:53:16 +0000 (0:00:01.911) 0:00:17.197 *********** 2025-06-01 04:54:41.522832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-01 04:54:41.522848 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-01 04:54:41.522865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-01 04:54:41.522883 | orchestrator | 2025-06-01 04:54:41.522901 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-01 04:54:41.522918 | orchestrator | Sunday 01 June 2025 04:53:17 +0000 (0:00:01.587) 0:00:18.785 *********** 2025-06-01 04:54:41.522932 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.522942 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.522951 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.522961 | orchestrator | 2025-06-01 04:54:41.522970 | orchestrator | TASK [horizon : Copying over custom themes] 
************************************ 2025-06-01 04:54:41.522980 | orchestrator | Sunday 01 June 2025 04:53:17 +0000 (0:00:00.292) 0:00:19.077 *********** 2025-06-01 04:54:41.522989 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.523014 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.523024 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.523033 | orchestrator | 2025-06-01 04:54:41.523043 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 04:54:41.523052 | orchestrator | Sunday 01 June 2025 04:53:18 +0000 (0:00:00.303) 0:00:19.381 *********** 2025-06-01 04:54:41.523062 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:54:41.523072 | orchestrator | 2025-06-01 04:54:41.523087 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-01 04:54:41.523096 | orchestrator | Sunday 01 June 2025 04:53:19 +0000 (0:00:00.852) 0:00:20.234 *********** 2025-06-01 04:54:41.523109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.523137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.523156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.523167 | orchestrator | 2025-06-01 04:54:41.523177 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-01 04:54:41.523187 | orchestrator | Sunday 01 June 2025 04:53:20 +0000 (0:00:01.454) 0:00:21.688 *********** 2025-06-01 04:54:41.523211 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 04:54:41.523229 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.523240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 04:54:41.523256 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.523273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 04:54:41.523292 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.523302 | orchestrator | 2025-06-01 04:54:41.523312 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-01 04:54:41.523321 | orchestrator | Sunday 01 June 2025 04:53:21 +0000 (0:00:00.702) 0:00:22.391 *********** 2025-06-01 04:54:41.523338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 04:54:41.523355 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.523371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 04:54:41.523381 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.523399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 04:54:41.523415 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.523425 | orchestrator | 2025-06-01 04:54:41.523435 | orchestrator | TASK [horizon : Deploy horizon container] 
************************************** 2025-06-01 04:54:41.523444 | orchestrator | Sunday 01 June 2025 04:53:22 +0000 (0:00:01.153) 0:00:23.544 *********** 2025-06-01 04:54:41.523460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.523483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.523501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 04:54:41.523512 | orchestrator | 2025-06-01 04:54:41.523522 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 04:54:41.523532 | orchestrator | Sunday 01 June 2025 04:53:23 +0000 (0:00:01.197) 0:00:24.742 *********** 2025-06-01 04:54:41.523541 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:54:41.523551 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:54:41.523561 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:54:41.523570 | orchestrator | 2025-06-01 04:54:41.523586 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 04:54:41.523596 | orchestrator | Sunday 01 June 2025 04:53:23 +0000 (0:00:00.312) 0:00:25.054 *********** 2025-06-01 04:54:41.523606 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:54:41.523615 | orchestrator | 2025-06-01 04:54:41.523625 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-01 04:54:41.523635 | orchestrator | Sunday 01 June 2025 04:53:24 +0000 (0:00:00.796) 0:00:25.851 *********** 2025-06-01 04:54:41.523644 | 
orchestrator | changed: [testbed-node-0] 2025-06-01 04:54:41.523654 | orchestrator | 2025-06-01 04:54:41.523669 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-01 04:54:41.523679 | orchestrator | Sunday 01 June 2025 04:53:26 +0000 (0:00:02.131) 0:00:27.982 *********** 2025-06-01 04:54:41.523689 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:54:41.523699 | orchestrator | 2025-06-01 04:54:41.523728 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-01 04:54:41.523738 | orchestrator | Sunday 01 June 2025 04:53:28 +0000 (0:00:01.952) 0:00:29.934 *********** 2025-06-01 04:54:41.523748 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:54:41.523757 | orchestrator | 2025-06-01 04:54:41.523767 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-01 04:54:41.523777 | orchestrator | Sunday 01 June 2025 04:53:43 +0000 (0:00:14.563) 0:00:44.498 *********** 2025-06-01 04:54:41.523786 | orchestrator | 2025-06-01 04:54:41.523796 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-01 04:54:41.523806 | orchestrator | Sunday 01 June 2025 04:53:43 +0000 (0:00:00.076) 0:00:44.575 *********** 2025-06-01 04:54:41.523815 | orchestrator | 2025-06-01 04:54:41.523825 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-01 04:54:41.523835 | orchestrator | Sunday 01 June 2025 04:53:43 +0000 (0:00:00.065) 0:00:44.640 *********** 2025-06-01 04:54:41.523845 | orchestrator | 2025-06-01 04:54:41.523854 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-01 04:54:41.523864 | orchestrator | Sunday 01 June 2025 04:53:43 +0000 (0:00:00.065) 0:00:44.706 *********** 2025-06-01 04:54:41.523874 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:54:41.523883 | 
orchestrator | changed: [testbed-node-1] 2025-06-01 04:54:41.523893 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:54:41.523902 | orchestrator | 2025-06-01 04:54:41.523920 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:54:41.523931 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-01 04:54:41.523941 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-01 04:54:41.523951 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-01 04:54:41.523961 | orchestrator | 2025-06-01 04:54:41.523970 | orchestrator | 2025-06-01 04:54:41.523980 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:54:41.523990 | orchestrator | Sunday 01 June 2025 04:54:40 +0000 (0:00:56.777) 0:01:41.484 *********** 2025-06-01 04:54:41.524000 | orchestrator | =============================================================================== 2025-06-01 04:54:41.524009 | orchestrator | horizon : Restart horizon container ------------------------------------ 56.78s 2025-06-01 04:54:41.524019 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.56s 2025-06-01 04:54:41.524029 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.13s 2025-06-01 04:54:41.524041 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 1.95s 2025-06-01 04:54:41.524057 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.91s 2025-06-01 04:54:41.524082 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.87s 2025-06-01 04:54:41.524106 | orchestrator | horizon : Copying over config.json files for services ------------------- 
1.60s 2025-06-01 04:54:41.524122 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2025-06-01 04:54:41.524136 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.45s 2025-06-01 04:54:41.524150 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.20s 2025-06-01 04:54:41.524165 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.15s 2025-06-01 04:54:41.524180 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.98s 2025-06-01 04:54:41.524195 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2025-06-01 04:54:41.524212 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-06-01 04:54:41.524229 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2025-06-01 04:54:41.524245 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2025-06-01 04:54:41.524260 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2025-06-01 04:54:41.524270 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-06-01 04:54:41.524280 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2025-06-01 04:54:41.524289 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-06-01 04:54:41.524299 | orchestrator | 2025-06-01 04:54:41 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state STARTED 2025-06-01 04:54:41.524308 | orchestrator | 2025-06-01 04:54:41 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:54:41.524318 | orchestrator | 2025-06-01 04:54:41 | INFO  | Wait 1 second(s) until the 
next check 2025-06-01 04:55:15.055209 | orchestrator | 2025-06-01 04:55:15 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 
2025-06-01 04:55:15.055995 | orchestrator | 2025-06-01 04:55:15 | INFO  | Task 78369379-7607-47d3-86be-b07fd1a18cf5 is in state SUCCESS 2025-06-01 04:55:15.058394 | orchestrator | 2025-06-01 04:55:15 | INFO  | Task 5c04cdb2-f56b-46d1-9a3d-d0cc4fbc310f is in state STARTED 2025-06-01 04:55:15.060901 | orchestrator | 2025-06-01 04:55:15 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:55:15.062688 | orchestrator | 2025-06-01 04:55:15 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:55:15.062962 | orchestrator | 2025-06-01 04:55:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:55:18.131947 | orchestrator | 2025-06-01 04:55:18 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 2025-06-01 04:55:18.135567 | orchestrator | 2025-06-01 04:55:18 | INFO  | Task 5c04cdb2-f56b-46d1-9a3d-d0cc4fbc310f is in state STARTED 2025-06-01 04:55:18.135625 | orchestrator | 2025-06-01 04:55:18 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 2025-06-01 04:55:18.135960 | orchestrator | 2025-06-01 04:55:18 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:55:18.136299 | orchestrator | 2025-06-01 04:55:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:55:21.186368 | orchestrator | 2025-06-01 04:55:21 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 2025-06-01 04:55:21.186478 | orchestrator | 2025-06-01 04:55:21 | INFO  | Task b0dd6e45-8cce-4711-955c-5e401434548f is in state STARTED 2025-06-01 04:55:21.186528 | orchestrator | 2025-06-01 04:55:21 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:55:21.191661 | orchestrator | 2025-06-01 04:55:21 | INFO  | Task 5c04cdb2-f56b-46d1-9a3d-d0cc4fbc310f is in state SUCCESS 2025-06-01 04:55:21.193416 | orchestrator | 2025-06-01 04:55:21 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state STARTED 
2025-06-01 04:55:21.195038 | orchestrator | 2025-06-01 04:55:21 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:55:21.195071 | orchestrator | 2025-06-01 04:55:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:55:42.530148 | orchestrator | 2025-06-01 04:55:42 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 2025-06-01 04:55:42.530605 | orchestrator | 2025-06-01 04:55:42 | INFO  | Task b0dd6e45-8cce-4711-955c-5e401434548f is in state STARTED 2025-06-01 04:55:42.533541 | orchestrator | 2025-06-01 04:55:42 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:55:42.534654 | orchestrator | 2025-06-01 04:55:42 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:55:42.536683 | orchestrator | 2025-06-01 04:55:42 | INFO  | Task 4a580b49-28b6-4ebc-87e2-66df2dd724ed is in state SUCCESS 2025-06-01 04:55:42.539247 | orchestrator | 2025-06-01 04:55:42.539324 | orchestrator | 2025-06-01 04:55:42.539801 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-01 04:55:42.539818 | orchestrator | 2025-06-01 04:55:42.539830 | orchestrator | TASK 
[osism.services.cephclient : Include container tasks] ********************* 2025-06-01 04:55:42.539841 | orchestrator | Sunday 01 June 2025 04:54:21 +0000 (0:00:00.225) 0:00:00.225 *********** 2025-06-01 04:55:42.539853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-01 04:55:42.539865 | orchestrator | 2025-06-01 04:55:42.539877 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-01 04:55:42.539888 | orchestrator | Sunday 01 June 2025 04:54:21 +0000 (0:00:00.205) 0:00:00.430 *********** 2025-06-01 04:55:42.539899 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-01 04:55:42.539910 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-01 04:55:42.539921 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-01 04:55:42.539932 | orchestrator | 2025-06-01 04:55:42.539943 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-01 04:55:42.539954 | orchestrator | Sunday 01 June 2025 04:54:22 +0000 (0:00:01.240) 0:00:01.671 *********** 2025-06-01 04:55:42.539965 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-01 04:55:42.540002 | orchestrator | 2025-06-01 04:55:42.540014 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-01 04:55:42.540025 | orchestrator | Sunday 01 June 2025 04:54:24 +0000 (0:00:01.197) 0:00:02.869 *********** 2025-06-01 04:55:42.540035 | orchestrator | changed: [testbed-manager] 2025-06-01 04:55:42.540046 | orchestrator | 2025-06-01 04:55:42.540057 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-01 04:55:42.540068 | orchestrator | Sunday 01 June 2025 
04:54:25 +0000 (0:00:01.077) 0:00:03.946 *********** 2025-06-01 04:55:42.540078 | orchestrator | changed: [testbed-manager] 2025-06-01 04:55:42.540089 | orchestrator | 2025-06-01 04:55:42.540099 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-01 04:55:42.540110 | orchestrator | Sunday 01 June 2025 04:54:26 +0000 (0:00:00.910) 0:00:04.857 *********** 2025-06-01 04:55:42.540120 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-06-01 04:55:42.540131 | orchestrator | ok: [testbed-manager] 2025-06-01 04:55:42.540142 | orchestrator | 2025-06-01 04:55:42.540152 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-01 04:55:42.540163 | orchestrator | Sunday 01 June 2025 04:55:02 +0000 (0:00:36.859) 0:00:41.717 *********** 2025-06-01 04:55:42.540174 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-01 04:55:42.540185 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-01 04:55:42.540196 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-01 04:55:42.540206 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-01 04:55:42.540217 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-01 04:55:42.540228 | orchestrator | 2025-06-01 04:55:42.540239 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-01 04:55:42.540250 | orchestrator | Sunday 01 June 2025 04:55:07 +0000 (0:00:04.185) 0:00:45.903 *********** 2025-06-01 04:55:42.540260 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-01 04:55:42.540271 | orchestrator | 2025-06-01 04:55:42.540282 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-01 04:55:42.540292 | orchestrator | Sunday 01 June 2025 04:55:07 +0000 (0:00:00.450) 0:00:46.353 *********** 
2025-06-01 04:55:42.540303 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:55:42.540314 | orchestrator | 2025-06-01 04:55:42.540324 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-01 04:55:42.540335 | orchestrator | Sunday 01 June 2025 04:55:07 +0000 (0:00:00.115) 0:00:46.469 *********** 2025-06-01 04:55:42.540345 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:55:42.540356 | orchestrator | 2025-06-01 04:55:42.540370 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-01 04:55:42.540383 | orchestrator | Sunday 01 June 2025 04:55:07 +0000 (0:00:00.294) 0:00:46.763 *********** 2025-06-01 04:55:42.540395 | orchestrator | changed: [testbed-manager] 2025-06-01 04:55:42.540407 | orchestrator | 2025-06-01 04:55:42.540419 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-01 04:55:42.540432 | orchestrator | Sunday 01 June 2025 04:55:09 +0000 (0:00:01.440) 0:00:48.204 *********** 2025-06-01 04:55:42.540444 | orchestrator | changed: [testbed-manager] 2025-06-01 04:55:42.540456 | orchestrator | 2025-06-01 04:55:42.540468 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-01 04:55:42.540480 | orchestrator | Sunday 01 June 2025 04:55:10 +0000 (0:00:00.965) 0:00:49.169 *********** 2025-06-01 04:55:42.540493 | orchestrator | changed: [testbed-manager] 2025-06-01 04:55:42.540505 | orchestrator | 2025-06-01 04:55:42.540527 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-01 04:55:42.540540 | orchestrator | Sunday 01 June 2025 04:55:10 +0000 (0:00:00.597) 0:00:49.767 *********** 2025-06-01 04:55:42.540552 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-01 04:55:42.540572 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-01 04:55:42.540585 | orchestrator | 
ok: [testbed-manager] => (item=radosgw-admin)
2025-06-01 04:55:42.540598 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-01 04:55:42.540610 | orchestrator |
2025-06-01 04:55:42.540623 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:55:42.540636 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 04:55:42.540650 | orchestrator |
2025-06-01 04:55:42.540662 | orchestrator |
2025-06-01 04:55:42.540717 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:55:42.540732 | orchestrator | Sunday 01 June 2025 04:55:12 +0000 (0:00:01.452) 0:00:51.220 ***********
2025-06-01 04:55:42.540743 | orchestrator | ===============================================================================
2025-06-01 04:55:42.540814 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.86s
2025-06-01 04:55:42.540829 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.19s
2025-06-01 04:55:42.540840 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s
2025-06-01 04:55:42.540850 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.44s
2025-06-01 04:55:42.540861 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.24s
2025-06-01 04:55:42.540872 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.20s
2025-06-01 04:55:42.540883 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.08s
2025-06-01 04:55:42.540894 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.97s
2025-06-01 04:55:42.540904 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s
2025-06-01 04:55:42.540915 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s
2025-06-01 04:55:42.540926 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2025-06-01 04:55:42.540936 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-06-01 04:55:42.540947 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2025-06-01 04:55:42.540958 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-06-01 04:55:42.540969 | orchestrator |
2025-06-01 04:55:42.540980 | orchestrator |
2025-06-01 04:55:42.540991 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:55:42.541001 | orchestrator |
2025-06-01 04:55:42.541011 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 04:55:42.541020 | orchestrator | Sunday 01 June 2025 04:55:16 +0000 (0:00:00.174) 0:00:00.174 ***********
2025-06-01 04:55:42.541030 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:55:42.541040 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:55:42.541049 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:55:42.541059 | orchestrator |
2025-06-01 04:55:42.541069 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 04:55:42.541078 | orchestrator | Sunday 01 June 2025 04:55:17 +0000 (0:00:00.312) 0:00:00.486 ***********
2025-06-01 04:55:42.541088 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-01 04:55:42.541098 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-01 04:55:42.541107 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-01 04:55:42.541117 | orchestrator |
2025-06-01 04:55:42.541126 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-01 04:55:42.541136 | orchestrator |
2025-06-01 04:55:42.541145 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-01 04:55:42.541155 | orchestrator | Sunday 01 June 2025 04:55:17 +0000 (0:00:00.686) 0:00:01.173 ***********
2025-06-01 04:55:42.541165 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:55:42.541174 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:55:42.541191 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:55:42.541201 | orchestrator |
2025-06-01 04:55:42.541210 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:55:42.541220 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:55:42.541230 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:55:42.541314 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 04:55:42.541327 | orchestrator |
2025-06-01 04:55:42.541336 | orchestrator |
2025-06-01 04:55:42.541346 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 04:55:42.541356 | orchestrator | Sunday 01 June 2025 04:55:18 +0000 (0:00:00.808) 0:00:01.982 ***********
2025-06-01 04:55:42.541366 | orchestrator | ===============================================================================
2025-06-01 04:55:42.541375 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.81s
2025-06-01 04:55:42.541385 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2025-06-01 04:55:42.541394 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-06-01 04:55:42.541404 | orchestrator |
2025-06-01 04:55:42.541414 | orchestrator |
2025-06-01 04:55:42.541423 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 04:55:42.541433 | orchestrator |
2025-06-01 04:55:42.541449 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 04:55:42.541459 | orchestrator | Sunday 01 June 2025 04:52:59 +0000 (0:00:00.233) 0:00:00.233 ***********
2025-06-01 04:55:42.541468 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:55:42.541478 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:55:42.541488 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:55:42.541497 | orchestrator |
2025-06-01 04:55:42.541507 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 04:55:42.541516 | orchestrator | Sunday 01 June 2025 04:52:59 +0000 (0:00:00.256) 0:00:00.489 ***********
2025-06-01 04:55:42.541526 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-01 04:55:42.541536 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-01 04:55:42.541546 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-01 04:55:42.541556 | orchestrator |
2025-06-01 04:55:42.541565 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-01 04:55:42.541575 | orchestrator |
2025-06-01 04:55:42.541619 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-01 04:55:42.541631 | orchestrator | Sunday 01 June 2025 04:52:59 +0000 (0:00:00.350) 0:00:00.840 ***********
2025-06-01 04:55:42.541641 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 04:55:42.541651 | orchestrator |
2025-06-01 04:55:42.541660 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-01 04:55:42.541670 |
orchestrator | Sunday 01 June 2025 04:53:00 +0000 (0:00:00.475) 0:00:01.315 *********** 2025-06-01 04:55:42.541683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 04:55:42.541705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 04:55:42.541722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 04:55:42.541734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.541792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.541806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.541823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.541834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.541844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.541854 | orchestrator | 2025-06-01 04:55:42.541864 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-01 04:55:42.541874 | orchestrator | Sunday 01 June 2025 04:53:01 +0000 (0:00:01.594) 0:00:02.910 *********** 2025-06-01 04:55:42.541884 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-01 04:55:42.541894 | orchestrator | 2025-06-01 04:55:42.541903 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-01 04:55:42.541917 | orchestrator | Sunday 01 June 2025 04:53:02 +0000 (0:00:00.738) 0:00:03.648 *********** 2025-06-01 04:55:42.541927 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:55:42.541937 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:55:42.541946 | 
orchestrator | ok: [testbed-node-2] 2025-06-01 04:55:42.541958 | orchestrator | 2025-06-01 04:55:42.541969 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-01 04:55:42.541980 | orchestrator | Sunday 01 June 2025 04:53:02 +0000 (0:00:00.372) 0:00:04.021 *********** 2025-06-01 04:55:42.541991 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 04:55:42.542003 | orchestrator | 2025-06-01 04:55:42.542055 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 04:55:42.542069 | orchestrator | Sunday 01 June 2025 04:53:03 +0000 (0:00:00.626) 0:00:04.648 *********** 2025-06-01 04:55:42.542081 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:55:42.542092 | orchestrator | 2025-06-01 04:55:42.542110 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-01 04:55:42.542121 | orchestrator | Sunday 01 June 2025 04:53:04 +0000 (0:00:00.576) 0:00:05.225 *********** 2025-06-01 04:55:42.542135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 04:55:42.542163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 04:55:42.542179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 04:55:42.542196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.542219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.542236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.542246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.542256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.542266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.542276 | orchestrator | 2025-06-01 04:55:42.542286 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-01 04:55:42.542295 | orchestrator | Sunday 01 June 2025 04:53:07 +0000 (0:00:03.488) 0:00:08.713 *********** 2025-06-01 04:55:42.542310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 04:55:42.542327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 04:55:42.542342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 04:55:42.542353 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.542363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 04:55:42.542374 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 04:55:42.542384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 04:55:42.542395 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:55:42.542419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 04:55:42.542438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 04:55:42.542448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 04:55:42.542458 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:55:42.542468 | orchestrator | 2025-06-01 04:55:42.542478 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-01 04:55:42.542487 | orchestrator | Sunday 01 June 2025 04:53:08 +0000 (0:00:00.575) 0:00:09.289 *********** 2025-06-01 
04:55:42.542498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542538 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:55:42.542555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542586 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:55:42.542597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542643 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:55:42.542653 | orchestrator |
2025-06-01 04:55:42.542663 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-06-01 04:55:42.542672 | orchestrator | Sunday 01 June 2025 04:53:09 +0000 (0:00:00.752) 0:00:10.041 ***********
2025-06-01 04:55:42.542683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542816 | orchestrator |
2025-06-01 04:55:42.542826 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-06-01 04:55:42.542836 | orchestrator | Sunday 01 June 2025 04:53:12 +0000 (0:00:03.599) 0:00:13.641 ***********
2025-06-01 04:55:42.542857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.542921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.542937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.542967 | orchestrator |
2025-06-01 04:55:42.542977 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-06-01 04:55:42.542987 | orchestrator | Sunday 01 June 2025 04:53:17 +0000 (0:00:04.724) 0:00:18.366 ***********
2025-06-01 04:55:42.542997 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:55:42.543007 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:55:42.543017 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:55:42.543026 | orchestrator |
2025-06-01 04:55:42.543036 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-06-01 04:55:42.543045 | orchestrator | Sunday 01 June 2025 04:53:18 +0000 (0:00:01.386) 0:00:19.753 ***********
2025-06-01 04:55:42.543055 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:55:42.543070 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:55:42.543080 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:55:42.543090 | orchestrator |
2025-06-01 04:55:42.543099 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-06-01 04:55:42.543109 | orchestrator | Sunday 01 June 2025 04:53:19 +0000 (0:00:00.523) 0:00:20.276 ***********
2025-06-01 04:55:42.543118 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:55:42.543128 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:55:42.543137 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:55:42.543147 | orchestrator |
2025-06-01 04:55:42.543156 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-06-01 04:55:42.543166 | orchestrator | Sunday 01 June 2025 04:53:19 +0000 (0:00:00.518) 0:00:20.795 ***********
2025-06-01 04:55:42.543175 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:55:42.543184 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:55:42.543194 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:55:42.543203 | orchestrator |
2025-06-01 04:55:42.543213 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-06-01 04:55:42.543222 | orchestrator | Sunday 01 June 2025 04:53:20 +0000 (0:00:00.299) 0:00:21.094 ***********
2025-06-01 04:55:42.543236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.543252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.543264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.543274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.543290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.543307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-01 04:55:42.543324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.543335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.543345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-01 04:55:42.543363 | orchestrator |
2025-06-01 04:55:42.543373 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-01 04:55:42.543382 | orchestrator | Sunday 01 June 2025 04:53:22 +0000 (0:00:02.408) 0:00:23.503 ***********
2025-06-01 04:55:42.543392 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:55:42.543402 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:55:42.543411 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:55:42.543421 | orchestrator |
2025-06-01 04:55:42.543430 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-06-01 04:55:42.543440 | orchestrator | Sunday 01 June 2025 04:53:22 +0000 (0:00:00.296) 0:00:23.799 ***********
2025-06-01 04:55:42.543449 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-01 04:55:42.543459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-01 04:55:42.543469 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-01 04:55:42.543479 | orchestrator |
2025-06-01 04:55:42.543489 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-06-01 04:55:42.543498 | orchestrator | Sunday 01 June 2025 04:53:24 +0000 (0:00:02.130) 0:00:25.930 ***********
2025-06-01 04:55:42.543508 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 04:55:42.543518 | orchestrator |
2025-06-01 04:55:42.543527 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-06-01 04:55:42.543537 | orchestrator | Sunday 01 June 2025 04:53:25 +0000 (0:00:00.948) 0:00:26.879 ***********
2025-06-01 04:55:42.543546 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:55:42.543556 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:55:42.543565 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:55:42.543575 | orchestrator |
2025-06-01 04:55:42.543584 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-01 04:55:42.543594 | orchestrator | Sunday 01 June 2025 04:53:26 +0000 (0:00:00.553) 0:00:27.433 ***********
2025-06-01 04:55:42.543603 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-01 04:55:42.543613 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 04:55:42.543623 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-01 04:55:42.543632 | orchestrator |
2025-06-01 04:55:42.543641 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-01 04:55:42.543651 | orchestrator | Sunday 01 June 2025 04:53:27 +0000 (0:00:01.084) 0:00:28.517 ***********
2025-06-01 04:55:42.543661 | orchestrator | ok: [testbed-node-0]
2025-06-01 04:55:42.543670 | orchestrator | ok: [testbed-node-1]
2025-06-01 04:55:42.543680 | orchestrator | ok: [testbed-node-2]
2025-06-01 04:55:42.543689 | orchestrator |
2025-06-01 04:55:42.543703 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-01 04:55:42.543713 | orchestrator | Sunday 01 June 2025 04:53:27 +0000 (0:00:00.288) 0:00:28.806 ***********
2025-06-01 04:55:42.543722 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-01 04:55:42.543732 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-01 04:55:42.543741 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-01 04:55:42.543751 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-01 04:55:42.543777 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-01 04:55:42.543792 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-01 04:55:42.543803 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-01 04:55:42.543812 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-01 04:55:42.543829 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-01 04:55:42.543838 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-01 04:55:42.543847 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-01 04:55:42.543857 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-01 04:55:42.543866 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-01 04:55:42.543876 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-01 04:55:42.543885 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-01 04:55:42.543895 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-01 04:55:42.543905 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-01 04:55:42.543914 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-01 04:55:42.543923 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-01 04:55:42.543933 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-01 04:55:42.543942 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-01 04:55:42.543952 | orchestrator |
2025-06-01 04:55:42.543961 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-06-01 04:55:42.543971 | orchestrator | Sunday 01 June 2025 04:53:36 +0000 (0:00:08.838) 0:00:37.644 ***********
2025-06-01 04:55:42.543980 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 04:55:42.543990 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 04:55:42.544000 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 04:55:42.544009 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 04:55:42.544019 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 04:55:42.544028 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 04:55:42.544037 | orchestrator |
2025-06-01 04:55:42.544047 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-06-01 04:55:42.544056 | orchestrator | Sunday 01 June 2025 04:53:39 +0000 (0:00:02.614) 0:00:40.258 ***********
2025-06-01 04:55:42.544070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.544088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.544105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-01 04:55:42.544116 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.544126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.544136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 04:55:42.544146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.544234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.544255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 04:55:42.544266 | orchestrator | 2025-06-01 04:55:42.544276 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 04:55:42.544286 | orchestrator | Sunday 01 June 2025 
04:53:41 +0000 (0:00:02.250) 0:00:42.508 *********** 2025-06-01 04:55:42.544296 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.544306 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:55:42.544316 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:55:42.544326 | orchestrator | 2025-06-01 04:55:42.544336 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-01 04:55:42.544345 | orchestrator | Sunday 01 June 2025 04:53:41 +0000 (0:00:00.298) 0:00:42.807 *********** 2025-06-01 04:55:42.544355 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544364 | orchestrator | 2025-06-01 04:55:42.544374 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-01 04:55:42.544383 | orchestrator | Sunday 01 June 2025 04:53:43 +0000 (0:00:02.217) 0:00:45.024 *********** 2025-06-01 04:55:42.544393 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544402 | orchestrator | 2025-06-01 04:55:42.544412 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-01 04:55:42.544422 | orchestrator | Sunday 01 June 2025 04:53:46 +0000 (0:00:02.607) 0:00:47.632 *********** 2025-06-01 04:55:42.544431 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:55:42.544441 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:55:42.544451 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:55:42.544460 | orchestrator | 2025-06-01 04:55:42.544470 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-01 04:55:42.544479 | orchestrator | Sunday 01 June 2025 04:53:47 +0000 (0:00:00.865) 0:00:48.497 *********** 2025-06-01 04:55:42.544489 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:55:42.544498 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:55:42.544508 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:55:42.544517 | orchestrator | 2025-06-01 
04:55:42.544527 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-01 04:55:42.544537 | orchestrator | Sunday 01 June 2025 04:53:47 +0000 (0:00:00.500) 0:00:48.998 *********** 2025-06-01 04:55:42.544546 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.544556 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:55:42.544565 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:55:42.544581 | orchestrator | 2025-06-01 04:55:42.544590 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-01 04:55:42.544600 | orchestrator | Sunday 01 June 2025 04:53:48 +0000 (0:00:00.361) 0:00:49.360 *********** 2025-06-01 04:55:42.544609 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544619 | orchestrator | 2025-06-01 04:55:42.544629 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-01 04:55:42.544638 | orchestrator | Sunday 01 June 2025 04:54:01 +0000 (0:00:12.945) 0:01:02.306 *********** 2025-06-01 04:55:42.544648 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544657 | orchestrator | 2025-06-01 04:55:42.544667 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-01 04:55:42.544676 | orchestrator | Sunday 01 June 2025 04:54:10 +0000 (0:00:09.095) 0:01:11.401 *********** 2025-06-01 04:55:42.544686 | orchestrator | 2025-06-01 04:55:42.544696 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-01 04:55:42.544705 | orchestrator | Sunday 01 June 2025 04:54:10 +0000 (0:00:00.286) 0:01:11.687 *********** 2025-06-01 04:55:42.544715 | orchestrator | 2025-06-01 04:55:42.544724 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-01 04:55:42.544734 | orchestrator | Sunday 01 June 2025 04:54:10 +0000 (0:00:00.065) 
0:01:11.753 *********** 2025-06-01 04:55:42.544743 | orchestrator | 2025-06-01 04:55:42.544753 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-01 04:55:42.544784 | orchestrator | Sunday 01 June 2025 04:54:10 +0000 (0:00:00.060) 0:01:11.814 *********** 2025-06-01 04:55:42.544794 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544804 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:55:42.544813 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:55:42.544823 | orchestrator | 2025-06-01 04:55:42.544832 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-01 04:55:42.544842 | orchestrator | Sunday 01 June 2025 04:54:36 +0000 (0:00:25.458) 0:01:37.272 *********** 2025-06-01 04:55:42.544852 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:55:42.544861 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544871 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:55:42.544880 | orchestrator | 2025-06-01 04:55:42.544890 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-01 04:55:42.544899 | orchestrator | Sunday 01 June 2025 04:54:45 +0000 (0:00:09.715) 0:01:46.987 *********** 2025-06-01 04:55:42.544909 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.544919 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:55:42.544936 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:55:42.544945 | orchestrator | 2025-06-01 04:55:42.544955 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 04:55:42.544965 | orchestrator | Sunday 01 June 2025 04:54:57 +0000 (0:00:11.321) 0:01:58.309 *********** 2025-06-01 04:55:42.544974 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:55:42.544984 | orchestrator | 
2025-06-01 04:55:42.544994 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-01 04:55:42.545003 | orchestrator | Sunday 01 June 2025 04:54:58 +0000 (0:00:00.783) 0:01:59.092 *********** 2025-06-01 04:55:42.545013 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:55:42.545022 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:55:42.545031 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:55:42.545041 | orchestrator | 2025-06-01 04:55:42.545050 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-01 04:55:42.545060 | orchestrator | Sunday 01 June 2025 04:54:58 +0000 (0:00:00.696) 0:01:59.789 *********** 2025-06-01 04:55:42.545069 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:55:42.545079 | orchestrator | 2025-06-01 04:55:42.545089 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-01 04:55:42.545101 | orchestrator | Sunday 01 June 2025 04:55:00 +0000 (0:00:01.773) 0:02:01.563 *********** 2025-06-01 04:55:42.545124 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-01 04:55:42.545134 | orchestrator | 2025-06-01 04:55:42.545144 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-01 04:55:42.545153 | orchestrator | Sunday 01 June 2025 04:55:10 +0000 (0:00:09.836) 0:02:11.399 *********** 2025-06-01 04:55:42.545163 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-01 04:55:42.545172 | orchestrator | 2025-06-01 04:55:42.545182 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-01 04:55:42.545192 | orchestrator | Sunday 01 June 2025 04:55:30 +0000 (0:00:20.106) 0:02:31.505 *********** 2025-06-01 04:55:42.545201 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-01 
04:55:42.545211 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-01 04:55:42.545221 | orchestrator | 2025-06-01 04:55:42.545230 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-01 04:55:42.545240 | orchestrator | Sunday 01 June 2025 04:55:36 +0000 (0:00:05.686) 0:02:37.192 *********** 2025-06-01 04:55:42.545250 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.545259 | orchestrator | 2025-06-01 04:55:42.545268 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-01 04:55:42.545278 | orchestrator | Sunday 01 June 2025 04:55:36 +0000 (0:00:00.642) 0:02:37.835 *********** 2025-06-01 04:55:42.545288 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.545297 | orchestrator | 2025-06-01 04:55:42.545307 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-01 04:55:42.545317 | orchestrator | Sunday 01 June 2025 04:55:37 +0000 (0:00:00.204) 0:02:38.039 *********** 2025-06-01 04:55:42.545326 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.545335 | orchestrator | 2025-06-01 04:55:42.545345 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-01 04:55:42.545355 | orchestrator | Sunday 01 June 2025 04:55:37 +0000 (0:00:00.136) 0:02:38.176 *********** 2025-06-01 04:55:42.545364 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.545374 | orchestrator | 2025-06-01 04:55:42.545383 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-01 04:55:42.545393 | orchestrator | Sunday 01 June 2025 04:55:37 +0000 (0:00:00.296) 0:02:38.472 *********** 2025-06-01 04:55:42.545403 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:55:42.545412 | orchestrator | 2025-06-01 04:55:42.545422 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2025-06-01 04:55:42.545431 | orchestrator | Sunday 01 June 2025 04:55:40 +0000 (0:00:02.673) 0:02:41.146 *********** 2025-06-01 04:55:42.545441 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:55:42.545450 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:55:42.545460 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:55:42.545469 | orchestrator | 2025-06-01 04:55:42.545479 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:55:42.545489 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-01 04:55:42.545499 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-01 04:55:42.545513 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-01 04:55:42.545523 | orchestrator | 2025-06-01 04:55:42.545532 | orchestrator | 2025-06-01 04:55:42.545542 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:55:42.545552 | orchestrator | Sunday 01 June 2025 04:55:40 +0000 (0:00:00.507) 0:02:41.653 *********** 2025-06-01 04:55:42.545561 | orchestrator | =============================================================================== 2025-06-01 04:55:42.545576 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.46s 2025-06-01 04:55:42.545586 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.11s 2025-06-01 04:55:42.545596 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.95s 2025-06-01 04:55:42.545606 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.32s 2025-06-01 04:55:42.545615 | orchestrator | keystone : Creating admin 
project, user, role, service, and endpoint ---- 9.84s 2025-06-01 04:55:42.545630 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.72s 2025-06-01 04:55:42.545640 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.10s 2025-06-01 04:55:42.545650 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.84s 2025-06-01 04:55:42.545659 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.69s 2025-06-01 04:55:42.545669 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.72s 2025-06-01 04:55:42.545678 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.60s 2025-06-01 04:55:42.545688 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.49s 2025-06-01 04:55:42.545697 | orchestrator | keystone : Creating default user role ----------------------------------- 2.67s 2025-06-01 04:55:42.545707 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.61s 2025-06-01 04:55:42.545716 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.61s 2025-06-01 04:55:42.545726 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.41s 2025-06-01 04:55:42.545735 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s 2025-06-01 04:55:42.545745 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.22s 2025-06-01 04:55:42.545767 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.13s 2025-06-01 04:55:42.545777 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.77s 2025-06-01 04:55:42.545787 | orchestrator | 2025-06-01 04:55:42 | INFO  | Task 
1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:55:42.545797 | orchestrator | 2025-06-01 04:55:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:55:45.606075 | orchestrator | 2025-06-01 04:55:45 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 2025-06-01 04:55:45.606364 | orchestrator | 2025-06-01 04:55:45 | INFO  | Task b0dd6e45-8cce-4711-955c-5e401434548f is in state STARTED 2025-06-01 04:55:45.607061 | orchestrator | 2025-06-01 04:55:45 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:55:45.607894 | orchestrator | 2025-06-01 04:55:45 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:55:45.608620 | orchestrator | 2025-06-01 04:55:45 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:55:45.608647 | orchestrator | 2025-06-01 04:55:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:55:57.744145 | orchestrator | 2025-06-01 04:55:57 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 2025-06-01 04:55:57.744237 | orchestrator | 2025-06-01 04:55:57 | INFO  | Task b0dd6e45-8cce-4711-955c-5e401434548f is in state SUCCESS 2025-06-01 04:55:57.745454 | orchestrator | 2025-06-01 04:55:57 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:55:57.745882 | orchestrator | 2025-06-01 04:55:57 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:55:57.746827 | orchestrator | 2025-06-01 04:55:57 | INFO  | Task
38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:55:57.748073 | orchestrator | 2025-06-01 04:55:57 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:55:57.748128 | orchestrator | 2025-06-01 04:55:57 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:56:34.180899 | orchestrator | 2025-06-01 04:56:34 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state STARTED 2025-06-01 04:56:34.181005 | orchestrator | 2025-06-01 04:56:34 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:56:34.181569 | orchestrator | 2025-06-01 04:56:34 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:56:34.182152 | orchestrator | 2025-06-01 04:56:34 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:56:34.182878 | orchestrator | 2025-06-01 04:56:34 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:56:34.182915 | orchestrator | 2025-06-01 04:56:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:56:37.217070 | orchestrator | 2025-06-01 04:56:37.217179 | orchestrator | 2025-06-01
04:56:37.217196 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:56:37.217209 | orchestrator | 2025-06-01 04:56:37.217221 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:56:37.217233 | orchestrator | Sunday 01 June 2025 04:55:25 +0000 (0:00:00.279) 0:00:00.279 *********** 2025-06-01 04:56:37.217244 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:56:37.217360 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:56:37.217377 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:56:37.217389 | orchestrator | ok: [testbed-manager] 2025-06-01 04:56:37.217400 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:56:37.217411 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:56:37.217422 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:56:37.217434 | orchestrator | 2025-06-01 04:56:37.217446 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:56:37.217458 | orchestrator | Sunday 01 June 2025 04:55:26 +0000 (0:00:00.899) 0:00:01.178 *********** 2025-06-01 04:56:37.217469 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217480 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217491 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217503 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217514 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217525 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217536 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-01 04:56:37.217547 | orchestrator | 2025-06-01 04:56:37.217558 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-01 
04:56:37.217569 | orchestrator | 2025-06-01 04:56:37.217580 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-01 04:56:37.217591 | orchestrator | Sunday 01 June 2025 04:55:27 +0000 (0:00:01.412) 0:00:02.590 *********** 2025-06-01 04:56:37.217604 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:56:37.217617 | orchestrator | 2025-06-01 04:56:37.217628 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-01 04:56:37.217639 | orchestrator | Sunday 01 June 2025 04:55:28 +0000 (0:00:01.297) 0:00:03.888 *********** 2025-06-01 04:56:37.217650 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-01 04:56:37.217661 | orchestrator | 2025-06-01 04:56:37.217672 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-01 04:56:37.217683 | orchestrator | Sunday 01 June 2025 04:55:32 +0000 (0:00:03.608) 0:00:07.496 *********** 2025-06-01 04:56:37.217695 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-01 04:56:37.217731 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-01 04:56:37.217743 | orchestrator | 2025-06-01 04:56:37.217754 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-01 04:56:37.217765 | orchestrator | Sunday 01 June 2025 04:55:38 +0000 (0:00:05.710) 0:00:13.206 *********** 2025-06-01 04:56:37.217950 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 04:56:37.217971 | orchestrator | 2025-06-01 04:56:37.217983 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
users] ************************* 2025-06-01 04:56:37.217994 | orchestrator | Sunday 01 June 2025 04:55:40 +0000 (0:00:02.605) 0:00:15.812 *********** 2025-06-01 04:56:37.218005 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 04:56:37.218071 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-01 04:56:37.218085 | orchestrator | 2025-06-01 04:56:37.218096 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-01 04:56:37.218107 | orchestrator | Sunday 01 June 2025 04:55:44 +0000 (0:00:03.510) 0:00:19.322 *********** 2025-06-01 04:56:37.218118 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 04:56:37.218129 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-01 04:56:37.218139 | orchestrator | 2025-06-01 04:56:37.218150 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-01 04:56:37.218161 | orchestrator | Sunday 01 June 2025 04:55:50 +0000 (0:00:05.960) 0:00:25.283 *********** 2025-06-01 04:56:37.218171 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-01 04:56:37.218182 | orchestrator | 2025-06-01 04:56:37.218192 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:56:37.218203 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218215 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218226 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218237 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218248 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218279 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218291 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.218302 | orchestrator | 2025-06-01 04:56:37.218313 | orchestrator | 2025-06-01 04:56:37.218324 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:56:37.218335 | orchestrator | Sunday 01 June 2025 04:55:55 +0000 (0:00:05.737) 0:00:31.021 *********** 2025-06-01 04:56:37.218346 | orchestrator | =============================================================================== 2025-06-01 04:56:37.218356 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.96s 2025-06-01 04:56:37.218367 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.74s 2025-06-01 04:56:37.218378 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.71s 2025-06-01 04:56:37.218389 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.61s 2025-06-01 04:56:37.218399 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.51s 2025-06-01 04:56:37.218410 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.61s 2025-06-01 04:56:37.218432 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s 2025-06-01 04:56:37.218443 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.30s 2025-06-01 04:56:37.218454 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-06-01 04:56:37.218465 | orchestrator | 2025-06-01 04:56:37.218485 | orchestrator | 2025-06-01 04:56:37.218504 | orchestrator | PLAY 
[Bootstraph ceph dashboard] *********************************************** 2025-06-01 04:56:37.218523 | orchestrator | 2025-06-01 04:56:37.218541 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-01 04:56:37.218559 | orchestrator | Sunday 01 June 2025 04:55:16 +0000 (0:00:00.276) 0:00:00.276 *********** 2025-06-01 04:56:37.218708 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.218724 | orchestrator | 2025-06-01 04:56:37.218737 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-01 04:56:37.218750 | orchestrator | Sunday 01 June 2025 04:55:18 +0000 (0:00:02.048) 0:00:02.324 *********** 2025-06-01 04:56:37.218763 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.218776 | orchestrator | 2025-06-01 04:56:37.218788 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-01 04:56:37.218857 | orchestrator | Sunday 01 June 2025 04:55:20 +0000 (0:00:01.084) 0:00:03.409 *********** 2025-06-01 04:56:37.218870 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.218882 | orchestrator | 2025-06-01 04:56:37.218896 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-01 04:56:37.218907 | orchestrator | Sunday 01 June 2025 04:55:21 +0000 (0:00:01.144) 0:00:04.553 *********** 2025-06-01 04:56:37.218918 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.218929 | orchestrator | 2025-06-01 04:56:37.218939 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-01 04:56:37.218950 | orchestrator | Sunday 01 June 2025 04:55:22 +0000 (0:00:01.283) 0:00:05.836 *********** 2025-06-01 04:56:37.218961 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.218972 | orchestrator | 2025-06-01 04:56:37.218983 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code 
to 404] ********************** 2025-06-01 04:56:37.219003 | orchestrator | Sunday 01 June 2025 04:55:23 +0000 (0:00:01.378) 0:00:07.215 *********** 2025-06-01 04:56:37.219015 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.219025 | orchestrator | 2025-06-01 04:56:37.219036 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-01 04:56:37.219047 | orchestrator | Sunday 01 June 2025 04:55:24 +0000 (0:00:00.919) 0:00:08.134 *********** 2025-06-01 04:56:37.219058 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.219068 | orchestrator | 2025-06-01 04:56:37.219079 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-01 04:56:37.219090 | orchestrator | Sunday 01 June 2025 04:55:25 +0000 (0:00:01.035) 0:00:09.170 *********** 2025-06-01 04:56:37.219100 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.219111 | orchestrator | 2025-06-01 04:56:37.219122 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-01 04:56:37.219133 | orchestrator | Sunday 01 June 2025 04:55:26 +0000 (0:00:01.028) 0:00:10.198 *********** 2025-06-01 04:56:37.219144 | orchestrator | changed: [testbed-manager] 2025-06-01 04:56:37.219154 | orchestrator | 2025-06-01 04:56:37.219165 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-01 04:56:37.219176 | orchestrator | Sunday 01 June 2025 04:56:10 +0000 (0:00:43.425) 0:00:53.624 *********** 2025-06-01 04:56:37.219187 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:56:37.219198 | orchestrator | 2025-06-01 04:56:37.219208 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-01 04:56:37.219219 | orchestrator | 2025-06-01 04:56:37.219230 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-01 
04:56:37.219240 | orchestrator | Sunday 01 June 2025 04:56:10 +0000 (0:00:00.160) 0:00:53.785 *********** 2025-06-01 04:56:37.219262 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:56:37.219273 | orchestrator | 2025-06-01 04:56:37.219284 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-01 04:56:37.219294 | orchestrator | 2025-06-01 04:56:37.219305 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-01 04:56:37.219316 | orchestrator | Sunday 01 June 2025 04:56:11 +0000 (0:00:01.421) 0:00:55.206 *********** 2025-06-01 04:56:37.219326 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:56:37.219450 | orchestrator | 2025-06-01 04:56:37.219548 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-01 04:56:37.219565 | orchestrator | 2025-06-01 04:56:37.219575 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-01 04:56:37.219586 | orchestrator | Sunday 01 June 2025 04:56:23 +0000 (0:00:11.197) 0:01:06.404 *********** 2025-06-01 04:56:37.219597 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:56:37.219608 | orchestrator | 2025-06-01 04:56:37.219632 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:56:37.219643 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 04:56:37.219655 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.219667 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.219678 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 04:56:37.219688 | orchestrator | 2025-06-01 04:56:37.219699 | 
orchestrator | 2025-06-01 04:56:37.219710 | orchestrator | 2025-06-01 04:56:37.219721 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:56:37.219732 | orchestrator | Sunday 01 June 2025 04:56:34 +0000 (0:00:11.119) 0:01:17.524 *********** 2025-06-01 04:56:37.219743 | orchestrator | =============================================================================== 2025-06-01 04:56:37.219754 | orchestrator | Create admin user ------------------------------------------------------ 43.43s 2025-06-01 04:56:37.219764 | orchestrator | Restart ceph manager service ------------------------------------------- 23.74s 2025-06-01 04:56:37.219775 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.05s 2025-06-01 04:56:37.219786 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.38s 2025-06-01 04:56:37.219856 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s 2025-06-01 04:56:37.219868 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s 2025-06-01 04:56:37.219879 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.08s 2025-06-01 04:56:37.219890 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.04s 2025-06-01 04:56:37.219900 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.03s 2025-06-01 04:56:37.219911 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.92s 2025-06-01 04:56:37.219922 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2025-06-01 04:56:37.219933 | orchestrator | 2025-06-01 04:56:37 | INFO  | Task cf491a7c-664f-40ca-a7da-b2879599209b is in state SUCCESS 2025-06-01 04:56:37.219944 | orchestrator | 2025-06-01 04:56:37 | INFO  | 
Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED
2025-06-01 04:56:37.219955 | orchestrator | 2025-06-01 04:56:37 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED
2025-06-01 04:56:37.219966 | orchestrator | 2025-06-01 04:56:37 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:56:37.219984 | orchestrator | 2025-06-01 04:56:37 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED
2025-06-01 04:56:37.220007 | orchestrator | 2025-06-01 04:56:37 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:56:40.248478 | orchestrator | 2025-06-01 04:56:40 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED
2025-06-01 04:56:40.248590 | orchestrator | 2025-06-01 04:56:40 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED
2025-06-01 04:56:40.249157 | orchestrator | 2025-06-01 04:56:40 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:56:40.249729 | orchestrator | 2025-06-01 04:56:40 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED
2025-06-01 04:56:40.249755 | orchestrator | 2025-06-01 04:56:40 | INFO  | Wait 1 second(s) until the next check
[identical status checks for the same four tasks repeat every ~3 s from 04:56:43 through 04:58:05]
2025-06-01 04:58:08.449086 | orchestrator | 2025-06-01 04:58:08 | INFO  | Task
ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:58:08.449557 | orchestrator | 2025-06-01 04:58:08 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:08.450358 | orchestrator | 2025-06-01 04:58:08 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:08.451104 | orchestrator | 2025-06-01 04:58:08 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:08.451117 | orchestrator | 2025-06-01 04:58:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:11.510312 | orchestrator | 2025-06-01 04:58:11 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:58:11.511776 | orchestrator | 2025-06-01 04:58:11 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:11.514289 | orchestrator | 2025-06-01 04:58:11 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:11.515913 | orchestrator | 2025-06-01 04:58:11 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:11.515945 | orchestrator | 2025-06-01 04:58:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:14.551363 | orchestrator | 2025-06-01 04:58:14 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state STARTED 2025-06-01 04:58:14.552265 | orchestrator | 2025-06-01 04:58:14 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:14.553467 | orchestrator | 2025-06-01 04:58:14 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:14.554673 | orchestrator | 2025-06-01 04:58:14 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:14.554811 | orchestrator | 2025-06-01 04:58:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:17.603427 | orchestrator | 2025-06-01 04:58:17.603484 | orchestrator | 2025-06-01 
04:58:17.603490 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:58:17.603494 | orchestrator | 2025-06-01 04:58:17.603498 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:58:17.603502 | orchestrator | Sunday 01 June 2025 04:55:25 +0000 (0:00:00.251) 0:00:00.251 *********** 2025-06-01 04:58:17.603506 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:58:17.603510 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:58:17.603514 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:58:17.603518 | orchestrator | 2025-06-01 04:58:17.603522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:58:17.603526 | orchestrator | Sunday 01 June 2025 04:55:25 +0000 (0:00:00.391) 0:00:00.643 *********** 2025-06-01 04:58:17.603530 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-01 04:58:17.603534 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-01 04:58:17.603538 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-01 04:58:17.603541 | orchestrator | 2025-06-01 04:58:17.603545 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-01 04:58:17.603558 | orchestrator | 2025-06-01 04:58:17.603562 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-01 04:58:17.603566 | orchestrator | Sunday 01 June 2025 04:55:25 +0000 (0:00:00.360) 0:00:01.003 *********** 2025-06-01 04:58:17.603570 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:58:17.603574 | orchestrator | 2025-06-01 04:58:17.603578 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-01 04:58:17.603581 | orchestrator | Sunday 01 June 2025 
04:55:26 +0000 (0:00:00.812) 0:00:01.816 *********** 2025-06-01 04:58:17.603585 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-01 04:58:17.603589 | orchestrator | 2025-06-01 04:58:17.603592 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-01 04:58:17.603596 | orchestrator | Sunday 01 June 2025 04:55:30 +0000 (0:00:03.867) 0:00:05.683 *********** 2025-06-01 04:58:17.603600 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-01 04:58:17.603604 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-01 04:58:17.603608 | orchestrator | 2025-06-01 04:58:17.603611 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-01 04:58:17.603615 | orchestrator | Sunday 01 June 2025 04:55:36 +0000 (0:00:06.055) 0:00:11.738 *********** 2025-06-01 04:58:17.603619 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-01 04:58:17.603623 | orchestrator | 2025-06-01 04:58:17.603633 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-01 04:58:17.603637 | orchestrator | Sunday 01 June 2025 04:55:39 +0000 (0:00:02.869) 0:00:14.608 *********** 2025-06-01 04:58:17.603641 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 04:58:17.603645 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-01 04:58:17.603649 | orchestrator | 2025-06-01 04:58:17.603653 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-01 04:58:17.603656 | orchestrator | Sunday 01 June 2025 04:55:42 +0000 (0:00:03.401) 0:00:18.009 *********** 2025-06-01 04:58:17.603660 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 04:58:17.603664 | orchestrator | 2025-06-01 
04:58:17.603668 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-01 04:58:17.603671 | orchestrator | Sunday 01 June 2025 04:55:45 +0000 (0:00:02.973) 0:00:20.983 *********** 2025-06-01 04:58:17.603675 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-01 04:58:17.603679 | orchestrator | 2025-06-01 04:58:17.603682 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-01 04:58:17.603686 | orchestrator | Sunday 01 June 2025 04:55:49 +0000 (0:00:04.115) 0:00:25.098 *********** 2025-06-01 04:58:17.603701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.603712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.603717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.603722 | orchestrator | 2025-06-01 04:58:17.603728 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-06-01 04:58:17.603732 | orchestrator | Sunday 01 June 2025 04:55:56 +0000 (0:00:06.549) 0:00:31.647 *********** 2025-06-01 04:58:17.603736 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:58:17.603740 | orchestrator | 2025-06-01 04:58:17.603746 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-01 04:58:17.603750 | orchestrator | Sunday 01 June 2025 04:55:57 +0000 (0:00:00.609) 0:00:32.257 *********** 2025-06-01 04:58:17.603754 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:58:17.603758 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:58:17.603761 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.603765 | orchestrator | 2025-06-01 04:58:17.603769 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-01 04:58:17.603773 | orchestrator | Sunday 01 June 2025 04:56:00 +0000 (0:00:03.550) 0:00:35.807 *********** 2025-06-01 04:58:17.603837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-01 04:58:17.603872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-01 04:58:17.603877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-01 04:58:17.603881 | orchestrator | 2025-06-01 04:58:17.603885 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-01 04:58:17.603889 | orchestrator | Sunday 01 June 2025 04:56:01 +0000 (0:00:01.404) 0:00:37.211 *********** 2025-06-01 04:58:17.603892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-01 
04:58:17.603896 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-01 04:58:17.603900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-01 04:58:17.603904 | orchestrator | 2025-06-01 04:58:17.603908 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-01 04:58:17.603911 | orchestrator | Sunday 01 June 2025 04:56:02 +0000 (0:00:01.016) 0:00:38.228 *********** 2025-06-01 04:58:17.603915 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:58:17.603919 | orchestrator | ok: [testbed-node-1] 2025-06-01 04:58:17.603922 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:58:17.603926 | orchestrator | 2025-06-01 04:58:17.603930 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-01 04:58:17.603934 | orchestrator | Sunday 01 June 2025 04:56:03 +0000 (0:00:00.735) 0:00:38.963 *********** 2025-06-01 04:58:17.603937 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.603941 | orchestrator | 2025-06-01 04:58:17.603945 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-01 04:58:17.603952 | orchestrator | Sunday 01 June 2025 04:56:03 +0000 (0:00:00.100) 0:00:39.064 *********** 2025-06-01 04:58:17.603961 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.603968 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.603975 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.603984 | orchestrator | 2025-06-01 04:58:17.603991 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-01 04:58:17.603997 | orchestrator | Sunday 01 June 2025 04:56:04 +0000 (0:00:00.271) 0:00:39.335 *********** 2025-06-01 04:58:17.604003 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 04:58:17.604009 | orchestrator | 2025-06-01 04:58:17.604016 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-01 04:58:17.604022 | orchestrator | Sunday 01 June 2025 04:56:04 +0000 (0:00:00.458) 0:00:39.794 *********** 2025-06-01 04:58:17.604034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604071 | orchestrator | 2025-06-01 04:58:17.604078 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-01 04:58:17.604084 | orchestrator | Sunday 01 June 2025 04:56:09 +0000 (0:00:04.951) 0:00:44.745 *********** 2025-06-01 04:58:17.604095 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:58:17.604099 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:58:17.604117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:58:17.604121 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604125 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604129 | orchestrator | 2025-06-01 04:58:17.604133 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-01 04:58:17.604136 | orchestrator | Sunday 01 June 2025 04:56:12 +0000 (0:00:03.212) 0:00:47.957 *********** 2025-06-01 04:58:17.604142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:58:17.604152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:58:17.604157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 04:58:17.604161 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604167 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604177 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604181 | orchestrator | 2025-06-01 04:58:17.604185 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-01 04:58:17.604188 | orchestrator | Sunday 01 June 2025 04:56:15 +0000 (0:00:02.555) 0:00:50.513 *********** 2025-06-01 04:58:17.604192 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604196 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604200 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604203 | orchestrator | 2025-06-01 04:58:17.604207 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-01 04:58:17.604211 | orchestrator | Sunday 01 June 2025 04:56:17 +0000 (0:00:02.689) 0:00:53.203 *********** 2025-06-01 04:58:17.604217 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604236 | orchestrator | 2025-06-01 04:58:17.604240 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-01 04:58:17.604244 | orchestrator | Sunday 01 June 2025 04:56:22 +0000 (0:00:04.034) 0:00:57.237 *********** 2025-06-01 04:58:17.604247 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604251 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:58:17.604255 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:58:17.604259 | orchestrator | 2025-06-01 04:58:17.604262 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-01 04:58:17.604266 | orchestrator | Sunday 01 June 2025 04:56:28 +0000 (0:00:06.939) 
0:01:04.177 *********** 2025-06-01 04:58:17.604270 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604274 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604277 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604281 | orchestrator | 2025-06-01 04:58:17.604285 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-01 04:58:17.604291 | orchestrator | Sunday 01 June 2025 04:56:33 +0000 (0:00:04.507) 0:01:08.685 *********** 2025-06-01 04:58:17.604294 | orchestrator | 2025-06-01 04:58:17 | INFO  | Task ad765246-2833-409c-be2f-7f041efe19be is in state SUCCESS 2025-06-01 04:58:17.604295 | orchestrator | 2025-06-01 04:58:17 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:58:17.604298 | orchestrator | 2025-06-01 04:58:17 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:17.604302 | orchestrator | 2025-06-01 04:58:17 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:17.604310 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604314 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604317 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604321 | orchestrator | 2025-06-01 04:58:17.604325 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-01 04:58:17.604329 | orchestrator | Sunday 01 June 2025 04:56:38 +0000 (0:00:04.669) 0:01:13.355 *********** 2025-06-01 04:58:17.604335 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604339 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604343 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604347 | orchestrator | 2025-06-01 04:58:17.604350 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-01 04:58:17.604354 | orchestrator | Sunday
01 June 2025 04:56:42 +0000 (0:00:04.332) 0:01:17.687 *********** 2025-06-01 04:58:17.604358 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604361 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604365 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604369 | orchestrator | 2025-06-01 04:58:17.604373 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-01 04:58:17.604376 | orchestrator | Sunday 01 June 2025 04:56:47 +0000 (0:00:05.203) 0:01:22.890 *********** 2025-06-01 04:58:17.604380 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604384 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604387 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604391 | orchestrator | 2025-06-01 04:58:17.604395 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-01 04:58:17.604399 | orchestrator | Sunday 01 June 2025 04:56:47 +0000 (0:00:00.269) 0:01:23.160 *********** 2025-06-01 04:58:17.604402 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-01 04:58:17.604406 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604410 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-01 04:58:17.604415 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604419 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-01 04:58:17.604423 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604427 | orchestrator | 2025-06-01 04:58:17.604431 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-01 04:58:17.604434 | orchestrator | Sunday 01 June 2025 04:56:53 +0000 (0:00:05.388) 0:01:28.548 *********** 2025-06-01 
04:58:17.604438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 04:58:17.604458 | orchestrator | 2025-06-01 04:58:17.604462 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-01 04:58:17.604466 | orchestrator | Sunday 01 June 2025 04:56:59 +0000 (0:00:06.494) 0:01:35.043 *********** 2025-06-01 04:58:17.604470 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:17.604474 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:17.604477 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:17.604481 | orchestrator | 2025-06-01 04:58:17.604486 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-01 04:58:17.604491 | orchestrator | Sunday 01 June 2025 04:57:00 +0000 (0:00:00.409) 
0:01:35.452 *********** 2025-06-01 04:58:17.604495 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604499 | orchestrator | 2025-06-01 04:58:17.604506 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-01 04:58:17.604511 | orchestrator | Sunday 01 June 2025 04:57:02 +0000 (0:00:01.869) 0:01:37.322 *********** 2025-06-01 04:58:17.604515 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604519 | orchestrator | 2025-06-01 04:58:17.604524 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-01 04:58:17.604531 | orchestrator | Sunday 01 June 2025 04:57:04 +0000 (0:00:01.982) 0:01:39.304 *********** 2025-06-01 04:58:17.604535 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604540 | orchestrator | 2025-06-01 04:58:17.604544 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-01 04:58:17.604549 | orchestrator | Sunday 01 June 2025 04:57:06 +0000 (0:00:01.935) 0:01:41.240 *********** 2025-06-01 04:58:17.604553 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604557 | orchestrator | 2025-06-01 04:58:17.604562 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-01 04:58:17.604566 | orchestrator | Sunday 01 June 2025 04:57:35 +0000 (0:00:29.585) 0:02:10.825 *********** 2025-06-01 04:58:17.604571 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604575 | orchestrator | 2025-06-01 04:58:17.604579 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-01 04:58:17.604584 | orchestrator | Sunday 01 June 2025 04:57:37 +0000 (0:00:02.363) 0:02:13.189 *********** 2025-06-01 04:58:17.604588 | orchestrator | 2025-06-01 04:58:17.604592 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-01 04:58:17.604597 
| orchestrator | Sunday 01 June 2025 04:57:38 +0000 (0:00:00.063) 0:02:13.253 *********** 2025-06-01 04:58:17.604601 | orchestrator | 2025-06-01 04:58:17.604605 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-01 04:58:17.604609 | orchestrator | Sunday 01 June 2025 04:57:38 +0000 (0:00:00.061) 0:02:13.314 *********** 2025-06-01 04:58:17.604614 | orchestrator | 2025-06-01 04:58:17.604618 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-01 04:58:17.604622 | orchestrator | Sunday 01 June 2025 04:57:38 +0000 (0:00:00.063) 0:02:13.377 *********** 2025-06-01 04:58:17.604627 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:17.604631 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:58:17.604635 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:58:17.604639 | orchestrator | 2025-06-01 04:58:17.604644 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:58:17.604648 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 04:58:17.604653 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 04:58:17.604658 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 04:58:17.604662 | orchestrator | 2025-06-01 04:58:17.604666 | orchestrator | 2025-06-01 04:58:17.604671 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:58:17.604677 | orchestrator | Sunday 01 June 2025 04:58:15 +0000 (0:00:36.882) 0:02:50.260 *********** 2025-06-01 04:58:17.604681 | orchestrator | =============================================================================== 2025-06-01 04:58:17.604686 | orchestrator | glance : Restart glance-api container 
---------------------------------- 36.88s 2025-06-01 04:58:17.604690 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.59s 2025-06-01 04:58:17.604695 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.94s 2025-06-01 04:58:17.604699 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.55s 2025-06-01 04:58:17.604703 | orchestrator | glance : Check glance containers ---------------------------------------- 6.49s 2025-06-01 04:58:17.604710 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.06s 2025-06-01 04:58:17.604714 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.39s 2025-06-01 04:58:17.604718 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.20s 2025-06-01 04:58:17.604723 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.95s 2025-06-01 04:58:17.604727 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.67s 2025-06-01 04:58:17.604731 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.51s 2025-06-01 04:58:17.604736 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.33s 2025-06-01 04:58:17.604740 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.12s 2025-06-01 04:58:17.604744 | orchestrator | glance : Copying over config.json files for services -------------------- 4.03s 2025-06-01 04:58:17.604748 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.87s 2025-06-01 04:58:17.604753 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.55s 2025-06-01 04:58:17.604757 | orchestrator | service-ks-register : glance | Creating users 
--------------------------- 3.40s 2025-06-01 04:58:17.604761 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.21s 2025-06-01 04:58:17.604766 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 2.97s 2025-06-01 04:58:17.604770 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 2.87s 2025-06-01 04:58:17.604774 | orchestrator | 2025-06-01 04:58:17 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:17.604779 | orchestrator | 2025-06-01 04:58:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:20.630069 | orchestrator | 2025-06-01 04:58:20 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:58:20.630361 | orchestrator | 2025-06-01 04:58:20 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:20.631183 | orchestrator | 2025-06-01 04:58:20 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:20.634796 | orchestrator | 2025-06-01 04:58:20 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:20.634917 | orchestrator | 2025-06-01 04:58:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:23.679449 | orchestrator | 2025-06-01 04:58:23 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:58:23.679559 | orchestrator | 2025-06-01 04:58:23 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:23.680183 | orchestrator | 2025-06-01 04:58:23 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:23.681020 | orchestrator | 2025-06-01 04:58:23 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:23.681047 | orchestrator | 2025-06-01 04:58:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:26.725672 | 
orchestrator | 2025-06-01 04:58:26 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:58:26.727298 | orchestrator | 2025-06-01 04:58:26 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:26.732509 | orchestrator | 2025-06-01 04:58:26 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:26.735537 | orchestrator | 2025-06-01 04:58:26 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state STARTED 2025-06-01 04:58:26.735981 | orchestrator | 2025-06-01 04:58:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:29.787436 | orchestrator | 2025-06-01 04:58:29 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:58:29.787538 | orchestrator | 2025-06-01 04:58:29 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:58:29.787551 | orchestrator | 2025-06-01 04:58:29 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:29.787945 | orchestrator | 2025-06-01 04:58:29 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:29.793317 | orchestrator | 2025-06-01 04:58:29 | INFO  | Task 1d1a5c3c-bc9c-4748-8ce3-e7a890508007 is in state SUCCESS 2025-06-01 04:58:29.794656 | orchestrator | 2025-06-01 04:58:29.794675 | orchestrator | 2025-06-01 04:58:29.794680 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:58:29.794685 | orchestrator | 2025-06-01 04:58:29.794690 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:58:29.794695 | orchestrator | Sunday 01 June 2025 04:55:16 +0000 (0:00:00.271) 0:00:00.271 *********** 2025-06-01 04:58:29.794699 | orchestrator | ok: [testbed-manager] 2025-06-01 04:58:29.794704 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:58:29.794709 | orchestrator | ok: 
[testbed-node-1] 2025-06-01 04:58:29.794713 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:58:29.794717 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:58:29.794722 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:58:29.794726 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:58:29.794730 | orchestrator | 2025-06-01 04:58:29.794735 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:58:29.794739 | orchestrator | Sunday 01 June 2025 04:55:17 +0000 (0:00:00.944) 0:00:01.215 *********** 2025-06-01 04:58:29.794744 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794748 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794753 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794757 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794761 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794766 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794770 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-01 04:58:29.794774 | orchestrator | 2025-06-01 04:58:29.794778 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-01 04:58:29.794783 | orchestrator | 2025-06-01 04:58:29.794787 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-01 04:58:29.794791 | orchestrator | Sunday 01 June 2025 04:55:18 +0000 (0:00:00.746) 0:00:01.961 *********** 2025-06-01 04:58:29.794797 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:58:29.794804 | orchestrator | 2025-06-01 04:58:29.794808 | 
orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-01 04:58:29.794813 | orchestrator | Sunday 01 June 2025 04:55:20 +0000 (0:00:01.756) 0:00:03.718 *********** 2025-06-01 04:58:29.794819 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 04:58:29.794835 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.794885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 
04:58:29.794915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 04:58:29.794925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.794945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.794952 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.794976 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.794981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.794986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.794994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.794999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795018 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795027 | orchestrator | 2025-06-01 04:58:29.795032 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-01 04:58:29.795036 | orchestrator | Sunday 01 June 2025 04:55:24 +0000 (0:00:03.933) 0:00:07.651 *********** 2025-06-01 04:58:29.795041 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:58:29.795046 | orchestrator | 2025-06-01 04:58:29.795051 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-01 04:58:29.795055 | orchestrator | Sunday 01 June 2025 04:55:25 +0000 (0:00:01.553) 0:00:09.205 *********** 2025-06-01 04:58:29.795063 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 04:58:29.795068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795104 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.795109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795133 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795153 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795167 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 04:58:29.795176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}}) 2025-06-01 04:58:29.795186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.795212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795771 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.795781 | orchestrator | 2025-06-01 04:58:29.795786 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-01 04:58:29.795791 | orchestrator | Sunday 01 June 2025 04:55:31 +0000 (0:00:05.756) 0:00:14.961 *********** 2025-06-01 04:58:29.795795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 04:58:29.795800 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.795805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.795810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 04:58:29.795820 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795826 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:58:29.795831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.795838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795889 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.795895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795900 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.795904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.795909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.795932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795937 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 04:58:29.795941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.795946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.795960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.795964 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.795986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.795997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796002 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796006 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.796011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796024 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.796029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2025-06-01 04:58:29.796051 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.796055 | orchestrator | 2025-06-01 04:58:29.796060 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-01 04:58:29.796064 | orchestrator | Sunday 01 June 2025 04:55:33 +0000 (0:00:01.624) 0:00:16.585 *********** 2025-06-01 04:58:29.796069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796082 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796094 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 04:58:29.796106 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796111 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796116 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 04:58:29.796121 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796159 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.796163 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:58:29.796167 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.796172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 04:58:29.796198 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.796207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796246 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.796250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796267 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.796271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 04:58:29.796276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 04:58:29.796807 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.796812 | orchestrator | 2025-06-01 04:58:29.796816 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-01 04:58:29.796821 | orchestrator | Sunday 01 June 2025 04:55:35 +0000 (0:00:02.075) 0:00:18.661 *********** 2025-06-01 04:58:29.796825 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.796830 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 04:58:29.796835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.796844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.796866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.796871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.796881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 
04:58:29.796886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 04:58:29.796895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796900 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796940 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 04:58:29.796948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796980 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 04:58:29.796993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.796998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.797002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 04:58:29.797007 | orchestrator | 2025-06-01 04:58:29.797011 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-01 04:58:29.797017 | orchestrator | Sunday 01 June 2025 04:55:40 +0000 (0:00:05.551) 0:00:24.212 *********** 2025-06-01 04:58:29.797022 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 04:58:29.797026 | orchestrator | 2025-06-01 04:58:29.797031 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-01 04:58:29.797037 | orchestrator | Sunday 01 June 2025 04:55:41 +0000 (0:00:00.736) 0:00:24.949 *********** 2025-06-01 04:58:29.797042 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797047 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797055 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797059 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797064 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.797068 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797078 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797083 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 
'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797088 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797095 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097024, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6786795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797100 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096985, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797104 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797109 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096985, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797117 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797122 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.797127 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096985, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797135 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096985, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797139 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097009, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6756794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797144 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096987, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6706793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797148 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096987, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6706793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797155 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096987, 'dev': 167, 'nlink': 1, 'atime': 
1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6706793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 04:58:29.797162 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096985, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 04:58:29.797167 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-06-01 04:58:29.797174 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797178 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797183 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-06-01 04:58:29.797187 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797194 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797200 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797208 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-06-01 04:58:29.797212 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797217 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-06-01 04:58:29.797221 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797226 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797232 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797239 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-06-01 04:58:29.797247 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797251 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-06-01 04:58:29.797256 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797260 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797265 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797297 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-06-01 04:58:29.797306 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797313 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-06-01 04:58:29.797318 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797322 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-06-01 04:58:29.797327 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797331 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-06-01 04:58:29.797338 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-06-01 04:58:29.797348 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797353 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-06-01 04:58:29.797357 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797362 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-06-01 04:58:29.797366 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-06-01 04:58:29.797371 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-06-01 04:58:29.797375 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-06-01 04:58:29.797388 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-06-01 04:58:29.797393 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-06-01 04:58:29.797398 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-06-01 04:58:29.797402 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-06-01 04:58:29.797407 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-06-01 04:58:29.797411 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-06-01 04:58:29.797416 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-06-01 04:58:29.797428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-06-01 04:58:29.797433 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-06-01 04:58:29.797439 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-06-01 04:58:29.797444 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-06-01 04:58:29.797449 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-06-01 04:58:29.797454 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-06-01 04:58:29.797460 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-06-01 04:58:29.797471 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-06-01 04:58:29.797516 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-06-01 04:58:29.797526 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-06-01 04:58:29.797531 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-06-01 04:58:29.797537 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-06-01 04:58:29.797542 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-06-01 04:58:29.797551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-06-01 04:58:29.797562 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', ...})
2025-06-01 04:58:29.797567 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-06-01 04:58:29.797573 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-06-01 04:58:29.797578 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', ...})
2025-06-01 04:58:29.797583 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-06-01 04:58:29.797589 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-06-01 04:58:29.797598 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-06-01 04:58:29.797811 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-06-01 04:58:29.797818 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-06-01 04:58:29.797823 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-06-01 04:58:29.797828 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-06-01 04:58:29.797832 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797837 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097003, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6746793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797846 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096983, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797870 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096983, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797878 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096983, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797886 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096994, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6726794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797894 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097003, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6746793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797902 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097043, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6826794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797910 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097012, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6766794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.797914 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797919 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.797928 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 
'inode': 1097003, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6746793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797933 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097003, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6746793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797938 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096983, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797942 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097043, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6826794, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797947 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097043, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6826794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797954 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797958 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097043, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6826794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-06-01 04:58:29.797969 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797974 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797978 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097003, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6746793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797983 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.797987 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.797992 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097022, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6776795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798004 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.798009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798048 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798054 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.798058 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097043, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6826794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798063 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798067 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.798072 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798080 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 04:58:29.798084 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.798089 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097044, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6836796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798093 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097017, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6776795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798104 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096989, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6706793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798109 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096994, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 
'ctime': 1748747891.6726794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798114 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096983, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6696794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798118 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097003, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6746793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 04:58:29.798125 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097043, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6826794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 04:58:29.798130 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096993, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6716793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 04:58:29.798134 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097029, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6796794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 04:58:29.798139 | orchestrator |
2025-06-01 04:58:29.798145 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-01 04:58:29.798150 | orchestrator | Sunday 01 June 2025 04:56:04 +0000 (0:00:23.363) 0:00:48.312 ***********
2025-06-01 04:58:29.798157 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 04:58:29.798161 | orchestrator |
2025-06-01 04:58:29.798166 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-01 04:58:29.798256 | orchestrator | Sunday 01 June 2025 04:56:05 +0000 (0:00:00.811) 0:00:49.123 ***********
2025-06-01 04:58:29.798261 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798284 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 04:58:29.798288 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798310 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798335 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798357 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798379 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798400 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-06-01 04:58:29.798422 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 04:58:29.798426 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-01 04:58:29.798430 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-01 04:58:29.798435 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 04:58:29.798439 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-01 04:58:29.798443 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-01 04:58:29.798448 | orchestrator |
2025-06-01 04:58:29.798452 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-01 04:58:29.798456 | orchestrator | Sunday 01 June 2025 04:56:08 +0000 (0:00:02.932) 0:00:52.055 ***********
2025-06-01 04:58:29.798461 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798465 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:58:29.798470 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798474 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:58:29.798478 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798483 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798487 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:58:29.798491 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:58:29.798496 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798500 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:58:29.798504 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798509 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:58:29.798513 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 04:58:29.798520 | orchestrator |
2025-06-01 04:58:29.798527 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-01 04:58:29.798532 | orchestrator | Sunday 01 June 2025 04:56:25 +0000 (0:00:16.834) 0:01:08.890 ***********
2025-06-01 04:58:29.798539 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798544 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:58:29.798548 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798552 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:58:29.798557 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798561 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:58:29.798565 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798569 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:58:29.798574 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798578 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:58:29.798582 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798587 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:58:29.798591 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-01 04:58:29.798595 | orchestrator |
2025-06-01 04:58:29.798600 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-01 04:58:29.798604 | orchestrator | Sunday 01 June 2025 04:56:28 +0000 (0:00:03.295) 0:01:12.186 ***********
2025-06-01 04:58:29.798608 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798613 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798618 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798622 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798626 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:58:29.798631 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:58:29.798635 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:58:29.798639 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798644 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:58:29.798648 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798653 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:58:29.798657 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-01 04:58:29.798661 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:58:29.798666 | orchestrator |
2025-06-01 04:58:29.798670 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-06-01 04:58:29.798674 | orchestrator | Sunday 01 June 2025 04:56:31 +0000 (0:00:02.564) 0:01:14.750 ***********
2025-06-01 04:58:29.798679 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 04:58:29.798683 | orchestrator |
2025-06-01 04:58:29.798687 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-06-01 04:58:29.798691 | orchestrator | Sunday 01 June 2025 04:56:32 +0000 (0:00:00.917) 0:01:15.667 ***********
2025-06-01 04:58:29.798698 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:58:29.798703 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.798707 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.798711 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.798716 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.798720 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.798724 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.798729 | orchestrator | 2025-06-01 04:58:29.798733 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-01 04:58:29.798737 | orchestrator | Sunday 01 June 2025 04:56:32 +0000 (0:00:00.616) 0:01:16.284 *********** 2025-06-01 04:58:29.798742 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:58:29.798746 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:58:29.798750 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.798754 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.798759 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.798763 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:58:29.798767 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:58:29.798772 | orchestrator | 2025-06-01 04:58:29.798776 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-01 04:58:29.798780 | orchestrator | Sunday 01 June 2025 04:56:35 +0000 (0:00:02.582) 0:01:18.866 *********** 2025-06-01 04:58:29.798784 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798789 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.798793 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798797 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:58:29.798802 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798806 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.798813 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798817 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.798824 | orchestrator | skipping: [testbed-node-3] => (item=2025-06-01 04:58:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:58:29.798828 | orchestrator | /ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798832 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.798837 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798841 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.798845 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 04:58:29.798866 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.798871 | orchestrator | 2025-06-01 04:58:29.798875 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-01 04:58:29.798880 | orchestrator | Sunday 01 June 2025 04:56:37 +0000 (0:00:01.718) 0:01:20.585 *********** 2025-06-01 04:58:29.798884 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 04:58:29.798888 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 04:58:29.798893 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.798898 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.798903 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 04:58:29.798908 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 04:58:29.798914 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-01 04:58:29.798919 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 04:58:29.798927 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 04:58:29.798932 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.798937 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.798942 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 04:58:29.798947 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.798952 | orchestrator | 2025-06-01 04:58:29.798958 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-01 04:58:29.798963 | orchestrator | Sunday 01 June 2025 04:56:39 +0000 (0:00:02.549) 0:01:23.134 *********** 2025-06-01 04:58:29.798968 | orchestrator | [WARNING]: Skipped 2025-06-01 04:58:29.798973 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-01 04:58:29.798978 | orchestrator | due to this access issue: 2025-06-01 04:58:29.798983 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-01 04:58:29.798988 | orchestrator | not a directory 2025-06-01 04:58:29.798993 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 04:58:29.798998 | orchestrator | 2025-06-01 04:58:29.799003 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-01 04:58:29.799008 | orchestrator | Sunday 01 June 2025 04:56:40 +0000 (0:00:01.166) 0:01:24.300 *********** 2025-06-01 04:58:29.799013 | orchestrator | skipping: 
[testbed-manager] 2025-06-01 04:58:29.799018 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.799023 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.799028 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.799033 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.799038 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.799043 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.799048 | orchestrator | 2025-06-01 04:58:29.799053 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-01 04:58:29.799058 | orchestrator | Sunday 01 June 2025 04:56:42 +0000 (0:00:01.455) 0:01:25.755 *********** 2025-06-01 04:58:29.799063 | orchestrator | skipping: [testbed-manager] 2025-06-01 04:58:29.799069 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:58:29.799074 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:58:29.799079 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:58:29.799083 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:58:29.799088 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:58:29.799093 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:58:29.799098 | orchestrator | 2025-06-01 04:58:29.799103 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-01 04:58:29.799108 | orchestrator | Sunday 01 June 2025 04:56:43 +0000 (0:00:01.322) 0:01:27.078 *********** 2025-06-01 04:58:29.799113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799126 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 04:58:29.799134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799139 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799161 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 04:58:29.799207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 04:58:29.799212 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799241 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 04:58:29.799295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2',
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 04:58:29.799300 | orchestrator |
2025-06-01 04:58:29.799304 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-06-01 04:58:29.799309 | orchestrator | Sunday 01 June 2025 04:56:48 +0000 (0:00:04.754) 0:01:31.833 ***********
2025-06-01 04:58:29.799313 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-01 04:58:29.799317 | orchestrator | skipping: [testbed-manager]
2025-06-01 04:58:29.799322 | orchestrator |
2025-06-01 04:58:29.799326 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799330 | orchestrator | Sunday 01 June 2025 04:56:50 +0000 (0:00:02.005) 0:01:33.839 ***********
2025-06-01 04:58:29.799335 | orchestrator |
2025-06-01 04:58:29.799339 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799343 | orchestrator | Sunday 01 June 2025 04:56:50 +0000 (0:00:00.118) 0:01:33.957 ***********
2025-06-01 04:58:29.799348 | orchestrator |
2025-06-01 04:58:29.799352 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799359 | orchestrator | Sunday 01 June 2025 04:56:50 +0000 (0:00:00.070) 0:01:34.027 ***********
2025-06-01 04:58:29.799363 | orchestrator |
2025-06-01 04:58:29.799368 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799372 | orchestrator | Sunday 01 June 2025 04:56:50 +0000 (0:00:00.063) 0:01:34.091 ***********
2025-06-01 04:58:29.799376 | orchestrator |
2025-06-01 04:58:29.799381 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799385 | orchestrator | Sunday 01 June 2025 04:56:50 +0000 (0:00:00.063) 0:01:34.154 ***********
2025-06-01 04:58:29.799389 | orchestrator |
2025-06-01 04:58:29.799394 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799398 | orchestrator | Sunday 01 June 2025 04:56:51 +0000 (0:00:00.421) 0:01:34.576 ***********
2025-06-01 04:58:29.799402 | orchestrator |
2025-06-01 04:58:29.799407 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-01 04:58:29.799411 | orchestrator | Sunday 01 June 2025 04:56:51 +0000 (0:00:00.081) 0:01:34.657 ***********
2025-06-01 04:58:29.799415 | orchestrator |
2025-06-01 04:58:29.799422 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-06-01 04:58:29.799426 | orchestrator | Sunday 01 June 2025 04:56:51 +0000 (0:00:00.243) 0:01:34.901 ***********
2025-06-01 04:58:29.799431 | orchestrator | changed: [testbed-manager]
2025-06-01 04:58:29.799435 | orchestrator |
2025-06-01 04:58:29.799441 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-06-01 04:58:29.799446 | orchestrator | Sunday 01 June 2025 04:57:09 +0000 (0:00:17.490) 0:01:52.391 ***********
2025-06-01 04:58:29.799450 | orchestrator | changed: [testbed-manager]
2025-06-01 04:58:29.799455 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:58:29.799459 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:58:29.799463 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:58:29.799468 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:58:29.799472 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:58:29.799476 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:58:29.799481 | orchestrator |
2025-06-01 04:58:29.799485 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-06-01 04:58:29.799489 | orchestrator | Sunday 01 June 2025 04:57:23 +0000 (0:00:14.149) 0:02:06.541 ***********
2025-06-01 04:58:29.799494 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:58:29.799498 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:58:29.799502 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:58:29.799507 | orchestrator |
2025-06-01 04:58:29.799511 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-06-01 04:58:29.799516 | orchestrator | Sunday 01 June 2025 04:57:29 +0000 (0:00:05.851) 0:02:12.392 ***********
2025-06-01 04:58:29.799520 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:58:29.799524 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:58:29.799528 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:58:29.799533 | orchestrator |
2025-06-01 04:58:29.799537 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-06-01 04:58:29.799542 | orchestrator | Sunday 01 June 2025 04:57:39 +0000 (0:00:10.825) 0:02:23.218 ***********
2025-06-01 04:58:29.799546 | orchestrator | changed: [testbed-manager]
2025-06-01 04:58:29.799550 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:58:29.799554 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:58:29.799559 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:58:29.799563 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:58:29.799567 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:58:29.799572 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:58:29.799576 | orchestrator |
2025-06-01 04:58:29.799580 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-06-01 04:58:29.799585 | orchestrator | Sunday 01 June 2025 04:57:56 +0000 (0:00:16.283) 0:02:39.502 ***********
2025-06-01 04:58:29.799589 | orchestrator | changed: [testbed-manager]
2025-06-01 04:58:29.799597 | orchestrator |
2025-06-01 04:58:29.799601 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-06-01 04:58:29.799606 | orchestrator | Sunday 01 June 2025 04:58:04 +0000 (0:00:07.931) 0:02:47.434 ***********
2025-06-01 04:58:29.799610 | orchestrator | changed: [testbed-node-0]
2025-06-01 04:58:29.799614 | orchestrator | changed: [testbed-node-1]
2025-06-01 04:58:29.799619 | orchestrator | changed: [testbed-node-2]
2025-06-01 04:58:29.799623 | orchestrator |
2025-06-01 04:58:29.799627 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-06-01 04:58:29.799632 | orchestrator | Sunday 01 June 2025 04:58:09 +0000 (0:00:05.124) 0:02:52.558 ***********
2025-06-01 04:58:29.799636 | orchestrator | changed: [testbed-manager]
2025-06-01 04:58:29.799640 | orchestrator |
2025-06-01 04:58:29.799645 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-06-01 04:58:29.799649 | orchestrator | Sunday 01 June 2025 04:58:15 +0000 (0:00:06.423) 0:02:58.982 ***********
2025-06-01 04:58:29.799653 | orchestrator | changed: [testbed-node-3]
2025-06-01 04:58:29.799658 | orchestrator | changed: [testbed-node-4]
2025-06-01 04:58:29.799662 | orchestrator | changed: [testbed-node-5]
2025-06-01 04:58:29.799666 | orchestrator |
2025-06-01 04:58:29.799671 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 04:58:29.799675 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-01 04:58:29.799680 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-01 04:58:29.799684 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-01 04:58:29.799689 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11
 rescued=0 ignored=0 2025-06-01 04:58:29.799693 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 04:58:29.799697 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 04:58:29.799702 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 04:58:29.799706 | orchestrator | 2025-06-01 04:58:29.799710 | orchestrator | 2025-06-01 04:58:29.799715 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:58:29.799719 | orchestrator | Sunday 01 June 2025 04:58:27 +0000 (0:00:12.175) 0:03:11.157 *********** 2025-06-01 04:58:29.799723 | orchestrator | =============================================================================== 2025-06-01 04:58:29.799728 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.36s 2025-06-01 04:58:29.799734 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.49s 2025-06-01 04:58:29.799739 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.83s 2025-06-01 04:58:29.799745 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.28s 2025-06-01 04:58:29.799750 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.15s 2025-06-01 04:58:29.799754 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.18s 2025-06-01 04:58:29.799758 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.83s 2025-06-01 04:58:29.799763 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.93s 2025-06-01 04:58:29.799767 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.42s 
2025-06-01 04:58:29.799774 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.85s 2025-06-01 04:58:29.799779 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.76s 2025-06-01 04:58:29.799783 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.55s 2025-06-01 04:58:29.799787 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.12s 2025-06-01 04:58:29.799792 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.75s 2025-06-01 04:58:29.799796 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.93s 2025-06-01 04:58:29.799800 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.30s 2025-06-01 04:58:29.799805 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.93s 2025-06-01 04:58:29.799809 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.58s 2025-06-01 04:58:29.799813 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.56s 2025-06-01 04:58:29.799818 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.55s 2025-06-01 04:58:32.842560 | orchestrator | 2025-06-01 04:58:32 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:58:32.843173 | orchestrator | 2025-06-01 04:58:32 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:58:32.846240 | orchestrator | 2025-06-01 04:58:32 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:58:32.847074 | orchestrator | 2025-06-01 04:58:32 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:58:32.847100 | orchestrator | 2025-06-01 04:58:32 | INFO  | Wait 1 second(s) 
until the next check 2025-06-01 04:59:30.696107 | orchestrator | 2025-06-01 04:59:30 | INFO  | Task 
ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:59:30.697239 | orchestrator | 2025-06-01 04:59:30 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:59:30.698743 | orchestrator | 2025-06-01 04:59:30 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:59:30.700171 | orchestrator | 2025-06-01 04:59:30 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:59:30.701313 | orchestrator | 2025-06-01 04:59:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:59:33.729742 | orchestrator | 2025-06-01 04:59:33 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:59:33.729936 | orchestrator | 2025-06-01 04:59:33 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:59:33.730328 | orchestrator | 2025-06-01 04:59:33 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:59:33.731586 | orchestrator | 2025-06-01 04:59:33 | INFO  | Task 42ae011f-039b-4a92-9267-4e2aa1f8d4d8 is in state STARTED 2025-06-01 04:59:33.731609 | orchestrator | 2025-06-01 04:59:33 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:59:33.731658 | orchestrator | 2025-06-01 04:59:33 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:59:36.757604 | orchestrator | 2025-06-01 04:59:36 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:59:36.759114 | orchestrator | 2025-06-01 04:59:36 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:59:36.760023 | orchestrator | 2025-06-01 04:59:36 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:59:36.760513 | orchestrator | 2025-06-01 04:59:36 | INFO  | Task 42ae011f-039b-4a92-9267-4e2aa1f8d4d8 is in state STARTED 2025-06-01 04:59:36.761908 | orchestrator | 2025-06-01 04:59:36 | INFO  | Task 
38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:59:36.762617 | orchestrator | 2025-06-01 04:59:36 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:59:39.791865 | orchestrator | 2025-06-01 04:59:39 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:59:39.795251 | orchestrator | 2025-06-01 04:59:39 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:59:39.795309 | orchestrator | 2025-06-01 04:59:39 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state STARTED 2025-06-01 04:59:39.795322 | orchestrator | 2025-06-01 04:59:39 | INFO  | Task 42ae011f-039b-4a92-9267-4e2aa1f8d4d8 is in state STARTED 2025-06-01 04:59:39.795342 | orchestrator | 2025-06-01 04:59:39 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:59:39.795354 | orchestrator | 2025-06-01 04:59:39 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:59:42.843288 | orchestrator | 2025-06-01 04:59:42 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 04:59:42.843501 | orchestrator | 2025-06-01 04:59:42 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 04:59:42.845960 | orchestrator | 2025-06-01 04:59:42 | INFO  | Task 58165b3f-5d89-4dea-a885-2bf4c88e2e00 is in state SUCCESS 2025-06-01 04:59:42.847825 | orchestrator | 2025-06-01 04:59:42.847922 | orchestrator | 2025-06-01 04:59:42.847989 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 04:59:42.848003 | orchestrator | 2025-06-01 04:59:42.848012 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 04:59:42.848032 | orchestrator | Sunday 01 June 2025 04:55:47 +0000 (0:00:00.436) 0:00:00.436 *********** 2025-06-01 04:59:42.848041 | orchestrator | ok: [testbed-node-0] 2025-06-01 04:59:42.848052 | orchestrator | ok: [testbed-node-1] 
2025-06-01 04:59:42.848088 | orchestrator | ok: [testbed-node-2] 2025-06-01 04:59:42.848098 | orchestrator | ok: [testbed-node-3] 2025-06-01 04:59:42.848106 | orchestrator | ok: [testbed-node-4] 2025-06-01 04:59:42.848136 | orchestrator | ok: [testbed-node-5] 2025-06-01 04:59:42.848146 | orchestrator | 2025-06-01 04:59:42.848155 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 04:59:42.848164 | orchestrator | Sunday 01 June 2025 04:55:48 +0000 (0:00:01.078) 0:00:01.514 *********** 2025-06-01 04:59:42.848173 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-01 04:59:42.848182 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-01 04:59:42.848191 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-01 04:59:42.848199 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-01 04:59:42.848208 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-01 04:59:42.848231 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-01 04:59:42.848243 | orchestrator | 2025-06-01 04:59:42.848258 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-01 04:59:42.848272 | orchestrator | 2025-06-01 04:59:42.848286 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-01 04:59:42.848324 | orchestrator | Sunday 01 June 2025 04:55:49 +0000 (0:00:00.672) 0:00:02.187 *********** 2025-06-01 04:59:42.848339 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 04:59:42.848353 | orchestrator | 2025-06-01 04:59:42.848367 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-01 04:59:42.848381 | orchestrator | Sunday 01 June 2025 04:55:52 
+0000 (0:00:02.918) 0:00:05.106 *********** 2025-06-01 04:59:42.848396 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-01 04:59:42.848411 | orchestrator | 2025-06-01 04:59:42.848425 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-01 04:59:42.848439 | orchestrator | Sunday 01 June 2025 04:55:56 +0000 (0:00:03.475) 0:00:08.582 *********** 2025-06-01 04:59:42.848455 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-01 04:59:42.848471 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-01 04:59:42.848486 | orchestrator | 2025-06-01 04:59:42.848513 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-01 04:59:42.848529 | orchestrator | Sunday 01 June 2025 04:56:01 +0000 (0:00:05.273) 0:00:13.855 *********** 2025-06-01 04:59:42.848545 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 04:59:42.848559 | orchestrator | 2025-06-01 04:59:42.848572 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-01 04:59:42.848585 | orchestrator | Sunday 01 June 2025 04:56:04 +0000 (0:00:02.826) 0:00:16.682 *********** 2025-06-01 04:59:42.848598 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 04:59:42.848612 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-01 04:59:42.848654 | orchestrator | 2025-06-01 04:59:42.848668 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-01 04:59:42.848682 | orchestrator | Sunday 01 June 2025 04:56:07 +0000 (0:00:03.594) 0:00:20.277 *********** 2025-06-01 04:59:42.848696 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 04:59:42.848710 | 
orchestrator | 2025-06-01 04:59:42.848723 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-01 04:59:42.848737 | orchestrator | Sunday 01 June 2025 04:56:10 +0000 (0:00:03.066) 0:00:23.343 *********** 2025-06-01 04:59:42.848751 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-01 04:59:42.848765 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-01 04:59:42.848780 | orchestrator | 2025-06-01 04:59:42.848794 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-01 04:59:42.848809 | orchestrator | Sunday 01 June 2025 04:56:18 +0000 (0:00:07.534) 0:00:30.878 *********** 2025-06-01 04:59:42.848826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.848870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.848980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.848993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.849003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.849012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.849032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.849052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.849062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.849072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.849083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.849092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.849112 | orchestrator |
2025-06-01 04:59:42.849134 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 04:59:42.849149 | orchestrator | Sunday 01 June 2025 04:56:20 +0000 (0:00:02.274) 0:00:33.152 ***********
2025-06-01 04:59:42.849163 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:59:42.849177 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:59:42.849191 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:59:42.849206 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:59:42.849220 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:59:42.849234 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:59:42.849249 | orchestrator |
2025-06-01 04:59:42.849259 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 04:59:42.849268 | orchestrator | Sunday 01 June 2025 04:56:21 +0000 (0:00:00.472) 0:00:33.624 ***********
2025-06-01 04:59:42.849276 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:59:42.849285 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:59:42.849293 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:59:42.849302 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:59:42.849311 | orchestrator |
2025-06-01 04:59:42.849319 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-01 04:59:42.849328 | orchestrator | Sunday 01 June 2025 04:56:21 +0000 (0:00:00.686) 0:00:34.311 ***********
2025-06-01 04:59:42.849342 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-01 04:59:42.849351 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-01 04:59:42.849359 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-01 04:59:42.849368 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-01 04:59:42.849377 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-01 04:59:42.849385 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-01 04:59:42.849394 | orchestrator |
2025-06-01 04:59:42.849403 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-01 04:59:42.849411 | orchestrator | Sunday 01 June 2025 04:56:23 +0000 (0:00:01.937) 0:00:36.249 ***********
2025-06-01 04:59:42.849421 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849431 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849449 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849465 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849479 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849488 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849498 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849512 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849527 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849541 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849551 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849560 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 04:59:42.849574 | orchestrator |
2025-06-01 04:59:42.849583 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-06-01 04:59:42.849592 | orchestrator | Sunday 01 June 2025 04:56:27 +0000 (0:00:04.146) 0:00:40.395 ***********
2025-06-01 04:59:42.849601 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-01 04:59:42.849610 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-01 04:59:42.849619 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-01 04:59:42.849628 | orchestrator |
2025-06-01 04:59:42.849636 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-01 04:59:42.849645 | orchestrator | Sunday 01 June 2025 04:56:29 +0000 (0:00:01.819) 0:00:42.215 ***********
2025-06-01 04:59:42.849654 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-01 04:59:42.849662 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-01 04:59:42.849671 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-01 04:59:42.849679 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 04:59:42.849688 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 04:59:42.849701 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 04:59:42.849711 | orchestrator |
2025-06-01 04:59:42.849719 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-01 04:59:42.849728 | orchestrator | Sunday 01 June 2025 04:56:33 +0000 (0:00:03.688) 0:00:45.903 ***********
2025-06-01 04:59:42.849736 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-01 04:59:42.849745 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-01 04:59:42.849754 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-01 04:59:42.849762 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-01 04:59:42.849771 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-01 04:59:42.849779 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-01 04:59:42.849788 | orchestrator |
2025-06-01 04:59:42.849797 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-01 04:59:42.849805 | orchestrator | Sunday 01 June 2025 04:56:34 +0000 (0:00:01.269) 0:00:47.173 ***********
2025-06-01 04:59:42.849814 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:59:42.849822 | orchestrator |
2025-06-01 04:59:42.849831 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-01 04:59:42.849839 | orchestrator | Sunday 01 June 2025 04:56:34 +0000 (0:00:00.866) 0:00:47.345 ***********
2025-06-01 04:59:42.849848 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:59:42.849861 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:59:42.849870 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:59:42.849902 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:59:42.849911 | orchestrator | skipping: [testbed-node-4]
2025-06-01 04:59:42.849920 | orchestrator | skipping: [testbed-node-5]
2025-06-01 04:59:42.849929 | orchestrator |
2025-06-01 04:59:42.849937 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 04:59:42.849946 | orchestrator | Sunday 01 June 2025 04:56:35 +0000 (0:00:00.866) 0:00:48.212 ***********
2025-06-01 04:59:42.849956 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 04:59:42.849966 | orchestrator |
2025-06-01 04:59:42.849974 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-01 04:59:42.849994 | orchestrator | Sunday 01 June 2025 04:56:37 +0000 (0:00:02.093) 0:00:50.305 ***********
2025-06-01 04:59:42.850003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 04:59:42.850013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 04:59:42.850065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 04:59:42.850079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850753 | orchestrator |
2025-06-01 04:59:42.850761 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-06-01 04:59:42.850768 | orchestrator | Sunday 01 June 2025 04:56:41 +0000 (0:00:03.470) 0:00:53.776 ***********
2025-06-01 04:59:42.850776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 04:59:42.850792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 04:59:42.850815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850822 | orchestrator | skipping: [testbed-node-1]
2025-06-01 04:59:42.850830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 04:59:42.850836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850843 | orchestrator | skipping: [testbed-node-0]
2025-06-01 04:59:42.850849 | orchestrator | skipping: [testbed-node-2]
2025-06-01 04:59:42.850856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 04:59:42.850906 | orchestrator | skipping: [testbed-node-3]
2025-06-01 04:59:42.850917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host',
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.850928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.850935 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:59:42.850941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', 
'', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.850948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.850954 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:59:42.850961 | orchestrator | 2025-06-01 04:59:42.850967 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-01 04:59:42.850973 | orchestrator | Sunday 01 June 2025 04:56:42 +0000 (0:00:01.411) 0:00:55.187 *********** 2025-06-01 04:59:42.850983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:59:42.850999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:59:42.851013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:59:42.851030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-06-01 04:59:42.851037 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:59:42.851048 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:59:42.851054 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:59:42.851064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851077 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:59:42.851083 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851097 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:59:42.851108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851129 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:59:42.851135 | orchestrator | 2025-06-01 04:59:42.851142 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-01 04:59:42.851148 | orchestrator | Sunday 01 June 2025 04:56:45 +0000 (0:00:02.605) 0:00:57.796 *********** 2025-06-01 04:59:42.851155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851269 | orchestrator | 2025-06-01 04:59:42.851277 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-01 04:59:42.851284 | orchestrator | Sunday 01 June 2025 04:56:48 +0000 (0:00:03.441) 0:01:01.237 *********** 2025-06-01 04:59:42.851292 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-01 04:59:42.851299 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:59:42.851307 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-01 04:59:42.851314 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:59:42.851321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-01 04:59:42.851329 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-01 04:59:42.851336 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:59:42.851343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-01 04:59:42.851350 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-01 04:59:42.851357 | orchestrator | 2025-06-01 04:59:42.851364 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-01 04:59:42.851371 | orchestrator | Sunday 01 June 2025 04:56:51 +0000 (0:00:02.445) 0:01:03.682 *********** 2025-06-01 04:59:42.851379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851499 | orchestrator | 2025-06-01 04:59:42.851506 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-01 04:59:42.851513 | orchestrator | Sunday 01 June 2025 04:57:01 +0000 (0:00:10.009) 0:01:13.691 *********** 2025-06-01 04:59:42.851524 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:59:42.851532 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:59:42.851539 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:59:42.851546 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:59:42.851553 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:59:42.851561 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:59:42.851568 | orchestrator | 2025-06-01 04:59:42.851575 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-01 04:59:42.851583 | orchestrator | Sunday 01 June 2025 04:57:03 +0000 (0:00:02.010) 0:01:15.702 *********** 2025-06-01 04:59:42.851594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:59:42.851603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851609 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:59:42.851616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:59:42.851627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851634 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:59:42.851644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851657 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:59:42.851667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 04:59:42.851673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851684 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:59:42.851690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851704 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:59:42.851714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 04:59:42.851730 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:59:42.851737 | orchestrator | 2025-06-01 04:59:42.851743 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-01 04:59:42.851749 | orchestrator | Sunday 01 June 2025 04:57:04 +0000 (0:00:01.010) 0:01:16.712 *********** 2025-06-01 04:59:42.851756 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:59:42.851762 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:59:42.851775 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:59:42.851781 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:59:42.851788 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:59:42.851794 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:59:42.851800 | orchestrator | 2025-06-01 04:59:42.851806 | orchestrator | TASK 
[cinder : Check cinder containers] **************************************** 2025-06-01 04:59:42.851813 | orchestrator | Sunday 01 June 2025 04:57:04 +0000 (0:00:00.681) 0:01:17.393 *********** 2025-06-01 04:59:42.851819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 
04:59:42.851836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 04:59:42.851857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 04:59:42.851942 | orchestrator | 2025-06-01 04:59:42.851948 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-01 04:59:42.851954 | orchestrator | Sunday 01 June 2025 04:57:07 +0000 (0:00:02.188) 0:01:19.581 *********** 2025-06-01 
04:59:42.851961 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:59:42.851967 | orchestrator | skipping: [testbed-node-1] 2025-06-01 04:59:42.851973 | orchestrator | skipping: [testbed-node-2] 2025-06-01 04:59:42.851980 | orchestrator | skipping: [testbed-node-3] 2025-06-01 04:59:42.851986 | orchestrator | skipping: [testbed-node-4] 2025-06-01 04:59:42.851992 | orchestrator | skipping: [testbed-node-5] 2025-06-01 04:59:42.851998 | orchestrator | 2025-06-01 04:59:42.852004 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-01 04:59:42.852010 | orchestrator | Sunday 01 June 2025 04:57:07 +0000 (0:00:00.871) 0:01:20.453 *********** 2025-06-01 04:59:42.852017 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:59:42.852023 | orchestrator | 2025-06-01 04:59:42.852029 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-01 04:59:42.852035 | orchestrator | Sunday 01 June 2025 04:57:10 +0000 (0:00:02.227) 0:01:22.680 *********** 2025-06-01 04:59:42.852042 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:59:42.852048 | orchestrator | 2025-06-01 04:59:42.852054 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-01 04:59:42.852060 | orchestrator | Sunday 01 June 2025 04:57:12 +0000 (0:00:02.410) 0:01:25.091 *********** 2025-06-01 04:59:42.852066 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:59:42.852072 | orchestrator | 2025-06-01 04:59:42.852078 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-01 04:59:42.852085 | orchestrator | Sunday 01 June 2025 04:57:31 +0000 (0:00:18.543) 0:01:43.635 *********** 2025-06-01 04:59:42.852091 | orchestrator | 2025-06-01 04:59:42.852100 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-01 04:59:42.852107 | orchestrator | Sunday 01 June 
2025 04:57:31 +0000 (0:00:00.064) 0:01:43.700 *********** 2025-06-01 04:59:42.852113 | orchestrator | 2025-06-01 04:59:42.852119 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-01 04:59:42.852126 | orchestrator | Sunday 01 June 2025 04:57:31 +0000 (0:00:00.064) 0:01:43.764 *********** 2025-06-01 04:59:42.852137 | orchestrator | 2025-06-01 04:59:42.852143 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-01 04:59:42.852149 | orchestrator | Sunday 01 June 2025 04:57:31 +0000 (0:00:00.064) 0:01:43.829 *********** 2025-06-01 04:59:42.852156 | orchestrator | 2025-06-01 04:59:42.852162 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-01 04:59:42.852168 | orchestrator | Sunday 01 June 2025 04:57:31 +0000 (0:00:00.081) 0:01:43.910 *********** 2025-06-01 04:59:42.852174 | orchestrator | 2025-06-01 04:59:42.852181 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-01 04:59:42.852187 | orchestrator | Sunday 01 June 2025 04:57:31 +0000 (0:00:00.065) 0:01:43.976 *********** 2025-06-01 04:59:42.852193 | orchestrator | 2025-06-01 04:59:42.852199 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-01 04:59:42.852208 | orchestrator | Sunday 01 June 2025 04:57:31 +0000 (0:00:00.065) 0:01:44.041 *********** 2025-06-01 04:59:42.852215 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:59:42.852221 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:59:42.852227 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:59:42.852233 | orchestrator | 2025-06-01 04:59:42.852239 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-01 04:59:42.852245 | orchestrator | Sunday 01 June 2025 04:57:54 +0000 (0:00:23.267) 0:02:07.309 *********** 2025-06-01 
04:59:42.852252 | orchestrator | changed: [testbed-node-1] 2025-06-01 04:59:42.852258 | orchestrator | changed: [testbed-node-0] 2025-06-01 04:59:42.852264 | orchestrator | changed: [testbed-node-2] 2025-06-01 04:59:42.852270 | orchestrator | 2025-06-01 04:59:42.852276 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-01 04:59:42.852283 | orchestrator | Sunday 01 June 2025 04:58:05 +0000 (0:00:11.002) 0:02:18.312 *********** 2025-06-01 04:59:42.852289 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:59:42.852295 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:59:42.852301 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:59:42.852307 | orchestrator | 2025-06-01 04:59:42.852313 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-01 04:59:42.852319 | orchestrator | Sunday 01 June 2025 04:59:30 +0000 (0:01:24.433) 0:03:42.745 *********** 2025-06-01 04:59:42.852326 | orchestrator | changed: [testbed-node-4] 2025-06-01 04:59:42.852332 | orchestrator | changed: [testbed-node-3] 2025-06-01 04:59:42.852338 | orchestrator | changed: [testbed-node-5] 2025-06-01 04:59:42.852344 | orchestrator | 2025-06-01 04:59:42.852350 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-01 04:59:42.852357 | orchestrator | Sunday 01 June 2025 04:59:39 +0000 (0:00:09.449) 0:03:52.194 *********** 2025-06-01 04:59:42.852363 | orchestrator | skipping: [testbed-node-0] 2025-06-01 04:59:42.852369 | orchestrator | 2025-06-01 04:59:42.852375 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 04:59:42.852381 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 04:59:42.852388 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-01 
04:59:42.852395 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-01 04:59:42.852401 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-01 04:59:42.852407 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-01 04:59:42.852413 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-01 04:59:42.852423 | orchestrator | 2025-06-01 04:59:42.852429 | orchestrator | 2025-06-01 04:59:42.852436 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 04:59:42.852442 | orchestrator | Sunday 01 June 2025 04:59:41 +0000 (0:00:01.562) 0:03:53.757 *********** 2025-06-01 04:59:42.852448 | orchestrator | =============================================================================== 2025-06-01 04:59:42.852454 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 84.43s 2025-06-01 04:59:42.852460 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.27s 2025-06-01 04:59:42.852467 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.54s 2025-06-01 04:59:42.852473 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.00s 2025-06-01 04:59:42.852479 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.01s 2025-06-01 04:59:42.852485 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.45s 2025-06-01 04:59:42.852491 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.53s 2025-06-01 04:59:42.852497 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.27s 2025-06-01 04:59:42.852537 | 
orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.15s 2025-06-01 04:59:42.852545 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.69s 2025-06-01 04:59:42.852551 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.60s 2025-06-01 04:59:42.852557 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.48s 2025-06-01 04:59:42.852564 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.47s 2025-06-01 04:59:42.852570 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.44s 2025-06-01 04:59:42.852576 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.07s 2025-06-01 04:59:42.852582 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.92s 2025-06-01 04:59:42.852588 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.83s 2025-06-01 04:59:42.852594 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.61s 2025-06-01 04:59:42.852601 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.45s 2025-06-01 04:59:42.852607 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.41s 2025-06-01 04:59:42.852616 | orchestrator | 2025-06-01 04:59:42 | INFO  | Task 42ae011f-039b-4a92-9267-4e2aa1f8d4d8 is in state STARTED 2025-06-01 04:59:42.852623 | orchestrator | 2025-06-01 04:59:42 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 04:59:42.852629 | orchestrator | 2025-06-01 04:59:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 04:59:45.895683 | orchestrator | 2025-06-01 04:59:45 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED 2025-06-01 
04:59:45.895765 | orchestrator | 2025-06-01 04:59:45 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 04:59:45.896226 | orchestrator | 2025-06-01 04:59:45 | INFO  | Task 42ae011f-039b-4a92-9267-4e2aa1f8d4d8 is in state STARTED
2025-06-01 04:59:45.901673 | orchestrator | 2025-06-01 04:59:45 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:59:45.901749 | orchestrator | 2025-06-01 04:59:45 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 04:59:45.901765 | orchestrator | 2025-06-01 04:59:45 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:59:48.933202 | orchestrator | 2025-06-01 04:59:48 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 04:59:48.933331 | orchestrator | 2025-06-01 04:59:48 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 04:59:48.933644 | orchestrator | 2025-06-01 04:59:48 | INFO  | Task 42ae011f-039b-4a92-9267-4e2aa1f8d4d8 is in state SUCCESS
2025-06-01 04:59:48.934172 | orchestrator | 2025-06-01 04:59:48 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:59:48.934570 | orchestrator | 2025-06-01 04:59:48 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 04:59:48.934596 | orchestrator | 2025-06-01 04:59:48 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:59:51.956561 | orchestrator | 2025-06-01 04:59:51 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 04:59:51.957102 | orchestrator | 2025-06-01 04:59:51 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 04:59:51.957626 | orchestrator | 2025-06-01 04:59:51 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:59:51.958449 | orchestrator | 2025-06-01 04:59:51 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 04:59:51.958479 | orchestrator | 2025-06-01 04:59:51 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:59:54.985509 | orchestrator | 2025-06-01 04:59:54 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 04:59:54.988181 | orchestrator | 2025-06-01 04:59:54 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 04:59:54.991012 | orchestrator | 2025-06-01 04:59:54 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:59:54.991042 | orchestrator | 2025-06-01 04:59:54 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 04:59:54.991050 | orchestrator | 2025-06-01 04:59:54 | INFO  | Wait 1 second(s) until the next check
2025-06-01 04:59:58.020002 | orchestrator | 2025-06-01 04:59:58 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 04:59:58.020603 | orchestrator | 2025-06-01 04:59:58 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 04:59:58.022141 | orchestrator | 2025-06-01 04:59:58 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 04:59:58.022959 | orchestrator | 2025-06-01 04:59:58 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 04:59:58.022969 | orchestrator | 2025-06-01 04:59:58 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:01.051253 | orchestrator | 2025-06-01 05:00:01 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:01.052384 | orchestrator | 2025-06-01 05:00:01 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:01.052434 | orchestrator | 2025-06-01 05:00:01 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:01.053139 | orchestrator | 2025-06-01 05:00:01 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:01.053168 | orchestrator | 2025-06-01 05:00:01 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:04.080548 | orchestrator | 2025-06-01 05:00:04 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:04.080736 | orchestrator | 2025-06-01 05:00:04 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:04.083507 | orchestrator | 2025-06-01 05:00:04 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:04.084319 | orchestrator | 2025-06-01 05:00:04 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:04.084496 | orchestrator | 2025-06-01 05:00:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:07.119757 | orchestrator | 2025-06-01 05:00:07 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:07.121597 | orchestrator | 2025-06-01 05:00:07 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:07.122233 | orchestrator | 2025-06-01 05:00:07 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:07.123272 | orchestrator | 2025-06-01 05:00:07 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:07.123315 | orchestrator | 2025-06-01 05:00:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:10.148088 | orchestrator | 2025-06-01 05:00:10 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:10.148430 | orchestrator | 2025-06-01 05:00:10 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:10.149074 | orchestrator | 2025-06-01 05:00:10 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:10.150271 | orchestrator | 2025-06-01 05:00:10 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:10.150317 | orchestrator | 2025-06-01 05:00:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:13.195735 | orchestrator | 2025-06-01 05:00:13 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:13.197991 | orchestrator | 2025-06-01 05:00:13 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:13.200045 | orchestrator | 2025-06-01 05:00:13 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:13.201834 | orchestrator | 2025-06-01 05:00:13 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:13.202121 | orchestrator | 2025-06-01 05:00:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:16.239412 | orchestrator | 2025-06-01 05:00:16 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:16.239942 | orchestrator | 2025-06-01 05:00:16 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:16.240765 | orchestrator | 2025-06-01 05:00:16 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:16.241663 | orchestrator | 2025-06-01 05:00:16 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:16.241713 | orchestrator | 2025-06-01 05:00:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:19.263329 | orchestrator | 2025-06-01 05:00:19 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:19.263614 | orchestrator | 2025-06-01 05:00:19 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:19.264342 | orchestrator | 2025-06-01 05:00:19 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:19.271461 | orchestrator | 2025-06-01 05:00:19 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:19.271555 | orchestrator | 2025-06-01 05:00:19 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:22.302081 | orchestrator | 2025-06-01 05:00:22 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:22.302283 | orchestrator | 2025-06-01 05:00:22 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:22.302847 | orchestrator | 2025-06-01 05:00:22 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:22.303519 | orchestrator | 2025-06-01 05:00:22 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:22.303593 | orchestrator | 2025-06-01 05:00:22 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:25.365518 | orchestrator | 2025-06-01 05:00:25 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:25.367009 | orchestrator | 2025-06-01 05:00:25 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:25.367960 | orchestrator | 2025-06-01 05:00:25 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:25.369008 | orchestrator | 2025-06-01 05:00:25 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:25.369137 | orchestrator | 2025-06-01 05:00:25 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:28.428220 | orchestrator | 2025-06-01 05:00:28 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:28.429172 | orchestrator | 2025-06-01 05:00:28 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:28.432288 | orchestrator | 2025-06-01 05:00:28 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:28.433392 | orchestrator | 2025-06-01 05:00:28 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:28.433423 | orchestrator | 2025-06-01 05:00:28 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:31.485111 | orchestrator | 2025-06-01 05:00:31 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:31.485856 | orchestrator | 2025-06-01 05:00:31 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:31.486550 | orchestrator | 2025-06-01 05:00:31 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:31.488398 | orchestrator | 2025-06-01 05:00:31 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:31.488489 | orchestrator | 2025-06-01 05:00:31 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:34.529848 | orchestrator | 2025-06-01 05:00:34 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state STARTED
2025-06-01 05:00:34.531283 | orchestrator | 2025-06-01 05:00:34 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED
2025-06-01 05:00:34.533375 | orchestrator | 2025-06-01 05:00:34 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:00:34.534181 | orchestrator | 2025-06-01 05:00:34 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED
2025-06-01 05:00:34.534223 | orchestrator | 2025-06-01 05:00:34 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:00:37.563384 | orchestrator | 2025-06-01 05:00:37 | INFO  | Task ba343375-9757-46ee-a031-47239915960b is in state SUCCESS
2025-06-01 05:00:37.564861 | orchestrator |
2025-06-01 05:00:37.564944 | orchestrator | None
2025-06-01 05:00:37.564959 | orchestrator |
2025-06-01 05:00:37.564971 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 05:00:37.564983 | orchestrator |
2025-06-01 05:00:37.564995 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 05:00:37.565006 | orchestrator | Sunday 01 June 2025 04:58:32 +0000 (0:00:00.254) 0:00:00.254 ***********
2025-06-01
05:00:37.565029 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:00:37.565042 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:00:37.565053 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:00:37.565087 | orchestrator |
2025-06-01 05:00:37.565099 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 05:00:37.565111 | orchestrator | Sunday 01 June 2025 04:58:32 +0000 (0:00:00.300) 0:00:00.554 ***********
2025-06-01 05:00:37.565122 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-06-01 05:00:37.565133 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-06-01 05:00:37.565144 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-06-01 05:00:37.565155 | orchestrator |
2025-06-01 05:00:37.565166 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-06-01 05:00:37.565177 | orchestrator |
2025-06-01 05:00:37.565188 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-01 05:00:37.565199 | orchestrator | Sunday 01 June 2025 04:58:33 +0000 (0:00:00.448) 0:00:01.003 ***********
2025-06-01 05:00:37.565210 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:00:37.565222 | orchestrator |
2025-06-01 05:00:37.565233 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-01 05:00:37.565244 | orchestrator | Sunday 01 June 2025 04:58:33 +0000 (0:00:00.512) 0:00:01.515 ***********
2025-06-01 05:00:37.565256 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-01 05:00:37.565266 | orchestrator |
2025-06-01 05:00:37.565277 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-01 05:00:37.565288 | orchestrator | Sunday 01 June 2025 04:58:36 +0000 (0:00:03.068) 0:00:04.584 ***********
2025-06-01 05:00:37.565299 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-06-01 05:00:37.565310 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-01 05:00:37.565321 | orchestrator |
2025-06-01 05:00:37.565331 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-01 05:00:37.565356 | orchestrator | Sunday 01 June 2025 04:58:42 +0000 (0:00:05.930) 0:00:10.515 ***********
2025-06-01 05:00:37.565367 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 05:00:37.565378 | orchestrator |
2025-06-01 05:00:37.565389 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-01 05:00:37.565400 | orchestrator | Sunday 01 June 2025 04:58:45 +0000 (0:00:02.944) 0:00:13.459 ***********
2025-06-01 05:00:37.565411 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 05:00:37.565422 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-01 05:00:37.565435 | orchestrator |
2025-06-01 05:00:37.565448 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-01 05:00:37.565466 | orchestrator | Sunday 01 June 2025 04:58:49 +0000 (0:00:03.532) 0:00:16.992 ***********
2025-06-01 05:00:37.565484 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 05:00:37.565504 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-01 05:00:37.565524 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-01 05:00:37.565544 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-01 05:00:37.565564 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-01 05:00:37.565585 | orchestrator |
2025-06-01 05:00:37.565606 | orchestrator |
TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-01 05:00:37.565625 | orchestrator | Sunday 01 June 2025 04:59:03 +0000 (0:00:14.363) 0:00:31.355 ***********
2025-06-01 05:00:37.565645 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-01 05:00:37.565662 | orchestrator |
2025-06-01 05:00:37.565681 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-01 05:00:37.565699 | orchestrator | Sunday 01 June 2025 04:59:07 +0000 (0:00:04.082) 0:00:35.438 ***********
2025-06-01 05:00:37.565725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.565788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.565834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.565858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.565883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.565941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566157 | orchestrator |
2025-06-01 05:00:37.566178 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-06-01 05:00:37.566199 | orchestrator | Sunday 01 June 2025 04:59:09 +0000 (0:00:01.681) 0:00:37.120 ***********
2025-06-01 05:00:37.566220 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-06-01 05:00:37.566239 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-01 05:00:37.566258 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
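The container definitions above carry kolla-style healthchecks such as `healthcheck_port barbican-worker 5672` and `healthcheck_curl http://192.168.16.10:9311`, each with `interval`, `retries`, and `timeout` settings. As a rough sketch only (not kolla's actual `healthcheck_port` script, which additionally verifies that the named process owns the connection), the port probe boils down to a timed TCP connect with Docker-style retry semantics:

```python
import socket
import time


def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Simplified analogue of the 'healthcheck_port <service> <port>' probe in
    the container definitions above; process ownership checks are omitted.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def probe(host: str, port: int, retries: int = 3, interval: float = 30.0) -> bool:
    # Docker-style semantics: the container is only marked unhealthy after
    # `retries` consecutive failed probes, spaced `interval` seconds apart.
    for attempt in range(retries):
        if healthcheck_port(host, port, timeout=30.0):
            return True
        if attempt < retries - 1:
            time.sleep(interval)
    return False
```

The `host` and `port` arguments here map onto the values embedded in the `test` commands above; everything else (function names, retry loop shape) is illustrative.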
2025-06-01 05:00:37.566277 | orchestrator |
2025-06-01 05:00:37.566296 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-01 05:00:37.566323 | orchestrator | Sunday 01 June 2025 04:59:10 +0000 (0:00:01.314) 0:00:38.435 ***********
2025-06-01 05:00:37.566335 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:00:37.566346 | orchestrator |
2025-06-01 05:00:37.566357 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-01 05:00:37.566370 | orchestrator | Sunday 01 June 2025 04:59:10 +0000 (0:00:00.159) 0:00:38.594 ***********
2025-06-01 05:00:37.566388 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:00:37.566408 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:00:37.566427 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:00:37.566447 | orchestrator |
2025-06-01 05:00:37.566467 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-01 05:00:37.566483 | orchestrator | Sunday 01 June 2025 04:59:11 +0000 (0:00:00.715) 0:00:39.310 ***********
2025-06-01 05:00:37.566512 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:00:37.566531 | orchestrator |
2025-06-01 05:00:37.566550 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-01 05:00:37.566568 | orchestrator | Sunday 01 June 2025 04:59:12 +0000 (0:00:01.001) 0:00:40.311 ***********
2025-06-01 05:00:37.566588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.566622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.566644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.566671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566779 | orchestrator |
2025-06-01 05:00:37.566790 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-06-01 05:00:37.566833 | orchestrator | Sunday 01 June 2025 04:59:16 +0000 (0:00:03.827) 0:00:44.138 ***********
2025-06-01 05:00:37.566851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.566870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566938 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:00:37.566958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.566971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.566993 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:00:37.567013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 05:00:37.567045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567081 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:00:37.567092 | orchestrator | 2025-06-01 05:00:37.567107 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-01 05:00:37.567126 | orchestrator | Sunday 01 June 2025 04:59:17 +0000 
(0:00:00.844) 0:00:44.982 *********** 2025-06-01 05:00:37.567177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 05:00:37.567201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567245 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:00:37.567257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 05:00:37.567269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567292 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:00:37.567311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 05:00:37.567323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.567359 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:00:37.567370 | orchestrator | 2025-06-01 05:00:37.567381 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-01 05:00:37.567392 | orchestrator | Sunday 01 June 2025 04:59:17 +0000 (0:00:00.666) 0:00:45.649 *********** 2025-06-01 05:00:37.567403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.567421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.567432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.567450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.567530 | orchestrator |
2025-06-01 05:00:37.567541 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-06-01 05:00:37.567558 | orchestrator | Sunday 01 June 2025 04:59:21 +0000 (0:00:03.512) 0:00:49.162 ***********
2025-06-01 05:00:37.567569 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:00:37.567580 | orchestrator | changed: [testbed-node-1]
2025-06-01 05:00:37.567591 | orchestrator | changed: [testbed-node-2]
2025-06-01 05:00:37.567602 | orchestrator |
2025-06-01 05:00:37.567612 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-06-01 05:00:37.567623 | orchestrator | Sunday 01 June 2025 04:59:23 +0000 (0:00:02.336) 0:00:51.499 ***********
2025-06-01 05:00:37.567634 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 05:00:37.567645 | orchestrator |
2025-06-01 05:00:37.567655 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-06-01 05:00:37.567666 | orchestrator | Sunday 01 June 2025 04:59:25 +0000 (0:00:01.728) 0:00:53.227 ***********
2025-06-01 05:00:37.567677 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:00:37.567688 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:00:37.567698 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:00:37.567709 | orchestrator |
2025-06-01 05:00:37.567720 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-06-01 05:00:37.567730 | orchestrator | Sunday 01 June 2025 04:59:27 +0000 (0:00:01.612) 0:00:54.840 ***********
2025-06-01 05:00:37.567746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api',
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.567758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.567778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.567800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.567850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.567861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:00:37.567872 | orchestrator |
2025-06-01 05:00:37.567883 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-06-01 05:00:37.567989 | orchestrator | Sunday 01 June 2025 04:59:38 +0000 (0:00:11.057) 0:01:05.897 ***********
2025-06-01 05:00:37.568002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 05:00:37.568013 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.568030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.568042 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:00:37.568053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 05:00:37.568064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.568090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.568102 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:00:37.568114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 05:00:37.568130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.568141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:00:37.568153 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 05:00:37.568164 | orchestrator | 2025-06-01 05:00:37.568176 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-01 05:00:37.568196 | orchestrator | Sunday 01 June 2025 04:59:39 +0000 (0:00:01.084) 0:01:06.982 *********** 2025-06-01 05:00:37.568217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.568260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.568281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 05:00:37.568293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.568304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.568315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.568333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.568355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.568375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:00:37.568393 | orchestrator | 2025-06-01 05:00:37.568409 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-01 05:00:37.568425 | orchestrator | Sunday 01 June 2025 04:59:42 +0000 (0:00:03.017) 0:01:10.000 *********** 2025-06-01 05:00:37.568442 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:00:37.568460 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:00:37.568476 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:00:37.568494 | orchestrator | 2025-06-01 05:00:37.568512 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-01 05:00:37.568523 | orchestrator | Sunday 01 June 2025 04:59:42 +0000 (0:00:00.394) 0:01:10.394 *********** 2025-06-01 05:00:37.568532 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:00:37.568542 | orchestrator | 
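The `healthcheck` dictionaries logged for each barbican container above follow kolla-ansible's container healthcheck schema: `interval`, `retries`, `start_period`, `timeout` (seconds, as strings) and a `test` command. As a rough illustration of how such a definition maps onto Docker's healthcheck options, here is a minimal sketch; this is not kolla-ansible's actual code, and the conversion helper is hypothetical:

```python
# Minimal sketch: translate a kolla-style healthcheck dict into a
# Docker-style healthcheck mapping. Docker expresses healthcheck
# durations in nanoseconds, hence the conversion.

NS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert {'interval': '30', 'retries': '3', ...} (seconds as strings)
    into a Docker healthcheck mapping (durations in nanoseconds)."""
    return {
        # e.g. ['CMD-SHELL', 'healthcheck_port barbican-worker 5672']
        "test": hc["test"],
        "interval": int(hc["interval"]) * NS_PER_SECOND,
        "timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "start_period": int(hc["start_period"]) * NS_PER_SECOND,
        "retries": int(hc["retries"]),
    }

# One of the healthcheck definitions from the log above:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}
docker_hc = to_docker_healthcheck(hc)
print(docker_hc["interval"])  # 30000000000
```

The string-typed durations in the logged dicts are why a conversion step like this is needed before handing the healthcheck to a container runtime.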
2025-06-01 05:00:37.568661 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-01 05:00:37.568689 | orchestrator | Sunday 01 June 2025 04:59:44 +0000 (0:00:02.110) 0:01:12.505 *********** 2025-06-01 05:00:37.568699 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:00:37.568709 | orchestrator | 2025-06-01 05:00:37.568719 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-01 05:00:37.568736 | orchestrator | Sunday 01 June 2025 04:59:46 +0000 (0:00:02.182) 0:01:14.688 *********** 2025-06-01 05:00:37.568759 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:00:37.568775 | orchestrator | 2025-06-01 05:00:37.568792 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-01 05:00:37.568810 | orchestrator | Sunday 01 June 2025 04:59:58 +0000 (0:00:11.661) 0:01:26.349 *********** 2025-06-01 05:00:37.568827 | orchestrator | 2025-06-01 05:00:37.568844 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-01 05:00:37.568854 | orchestrator | Sunday 01 June 2025 04:59:58 +0000 (0:00:00.132) 0:01:26.481 *********** 2025-06-01 05:00:37.568866 | orchestrator | 2025-06-01 05:00:37.568882 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-01 05:00:37.568925 | orchestrator | Sunday 01 June 2025 04:59:58 +0000 (0:00:00.155) 0:01:26.637 *********** 2025-06-01 05:00:37.568943 | orchestrator | 2025-06-01 05:00:37.568960 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-01 05:00:37.568989 | orchestrator | Sunday 01 June 2025 04:59:59 +0000 (0:00:00.147) 0:01:26.784 *********** 2025-06-01 05:00:37.569003 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:00:37.569014 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:00:37.569023 | orchestrator | 
changed: [testbed-node-2] 2025-06-01 05:00:37.569048 | orchestrator | 2025-06-01 05:00:37.569068 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-01 05:00:37.569079 | orchestrator | Sunday 01 June 2025 05:00:11 +0000 (0:00:12.906) 0:01:39.690 *********** 2025-06-01 05:00:37.569088 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:00:37.569098 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:00:37.569108 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:00:37.569118 | orchestrator | 2025-06-01 05:00:37.569127 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-01 05:00:37.569137 | orchestrator | Sunday 01 June 2025 05:00:23 +0000 (0:00:11.853) 0:01:51.544 *********** 2025-06-01 05:00:37.569146 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:00:37.569156 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:00:37.569166 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:00:37.569175 | orchestrator | 2025-06-01 05:00:37.569185 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:00:37.569196 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 05:00:37.569207 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 05:00:37.569216 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 05:00:37.569226 | orchestrator | 2025-06-01 05:00:37.569236 | orchestrator | 2025-06-01 05:00:37.569246 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:00:37.569255 | orchestrator | Sunday 01 June 2025 05:00:36 +0000 (0:00:12.938) 0:02:04.483 *********** 2025-06-01 05:00:37.569265 | orchestrator | 
=============================================================================== 2025-06-01 05:00:37.569275 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.36s 2025-06-01 05:00:37.569295 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.94s 2025-06-01 05:00:37.569305 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.91s 2025-06-01 05:00:37.569314 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.85s 2025-06-01 05:00:37.569324 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.66s 2025-06-01 05:00:37.569334 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.06s 2025-06-01 05:00:37.569343 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.93s 2025-06-01 05:00:37.569353 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.08s 2025-06-01 05:00:37.569362 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.83s 2025-06-01 05:00:37.569372 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.53s 2025-06-01 05:00:37.569382 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.51s 2025-06-01 05:00:37.569391 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.07s 2025-06-01 05:00:37.569401 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.02s 2025-06-01 05:00:37.569410 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.94s 2025-06-01 05:00:37.569420 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.34s 2025-06-01 05:00:37.569430 | orchestrator | barbican : 
Creating barbican database user and setting permissions ------ 2.18s 2025-06-01 05:00:37.569440 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s 2025-06-01 05:00:37.569456 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.73s 2025-06-01 05:00:37.569466 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.68s 2025-06-01 05:00:37.569475 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.61s 2025-06-01 05:00:37.569487 | orchestrator | 2025-06-01 05:00:37 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:00:37.569505 | orchestrator | 2025-06-01 05:00:37 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:00:37.569672 | orchestrator | 2025-06-01 05:00:37 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:00:37.569688 | orchestrator | 2025-06-01 05:00:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:00:40.593419 | orchestrator | 2025-06-01 05:00:40 | INFO  | Task b144fac2-ad20-44e7-a60c-6e3cd744f06f is in state STARTED 2025-06-01 05:00:40.593612 | orchestrator | 2025-06-01 05:00:40 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:00:40.594325 | orchestrator | 2025-06-01 05:00:40 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:00:40.595263 | orchestrator | 2025-06-01 05:00:40 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:00:40.595298 | orchestrator | 2025-06-01 05:00:40 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:00:43.619735 | orchestrator | 2025-06-01 05:00:43 | INFO  | Task b144fac2-ad20-44e7-a60c-6e3cd744f06f is in state STARTED 2025-06-01 05:00:43.619862 | orchestrator | 2025-06-01 05:00:43 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in 
state STARTED 2025-06-01 05:00:43.620432 | orchestrator | 2025-06-01 05:00:43 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:00:43.621350 | orchestrator | 2025-06-01 05:00:43 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:00:43.621394 | orchestrator | 2025-06-01 05:00:43 | INFO  | Wait 1 second(s) until the next check
[identical polling entries for the same four tasks, repeated every ~3 s from 05:00:46 to 05:01:23, omitted]
2025-06-01 05:01:26.259484 | orchestrator | 2025-06-01 05:01:26 | INFO  | Task b144fac2-ad20-44e7-a60c-6e3cd744f06f is in state SUCCESS 2025-06-01 05:01:26.261183 | orchestrator | 2025-06-01 05:01:26 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:01:26.261960 | orchestrator | 2025-06-01 05:01:26 | INFO  | Task 5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:01:26.263180 | orchestrator | 2025-06-01 05:01:26 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:01:26.264416 | orchestrator | 2025-06-01 05:01:26 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:01:26.264512 | orchestrator | 2025-06-01 05:01:26 | INFO  | Wait 1 second(s) until the next check
[identical polling entries for the four remaining tasks, repeated every ~3 s from 05:01:29 to 05:02:12, omitted]
2025-06-01 05:02:15.009006 | orchestrator | 2025-06-01 05:02:15 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:15.009972 | orchestrator | 2025-06-01 05:02:15 | INFO  | Task 
5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:02:15.010210 | orchestrator | 2025-06-01 05:02:15 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:15.011080 | orchestrator | 2025-06-01 05:02:15 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:15.011115 | orchestrator | 2025-06-01 05:02:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:18.058788 | orchestrator | 2025-06-01 05:02:18 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:18.060318 | orchestrator | 2025-06-01 05:02:18 | INFO  | Task 5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:02:18.061958 | orchestrator | 2025-06-01 05:02:18 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:18.063783 | orchestrator | 2025-06-01 05:02:18 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:18.064166 | orchestrator | 2025-06-01 05:02:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:21.103886 | orchestrator | 2025-06-01 05:02:21 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:21.104958 | orchestrator | 2025-06-01 05:02:21 | INFO  | Task 5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:02:21.106607 | orchestrator | 2025-06-01 05:02:21 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:21.108290 | orchestrator | 2025-06-01 05:02:21 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:21.108318 | orchestrator | 2025-06-01 05:02:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:24.164411 | orchestrator | 2025-06-01 05:02:24 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:24.164525 | orchestrator | 2025-06-01 05:02:24 | INFO  | Task 
5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:02:24.165636 | orchestrator | 2025-06-01 05:02:24 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:24.167321 | orchestrator | 2025-06-01 05:02:24 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:24.167363 | orchestrator | 2025-06-01 05:02:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:27.209558 | orchestrator | 2025-06-01 05:02:27 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:27.211132 | orchestrator | 2025-06-01 05:02:27 | INFO  | Task 5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:02:27.212068 | orchestrator | 2025-06-01 05:02:27 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:27.213638 | orchestrator | 2025-06-01 05:02:27 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:27.213685 | orchestrator | 2025-06-01 05:02:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:30.261762 | orchestrator | 2025-06-01 05:02:30 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:30.263755 | orchestrator | 2025-06-01 05:02:30 | INFO  | Task 5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state STARTED 2025-06-01 05:02:30.265436 | orchestrator | 2025-06-01 05:02:30 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:30.267397 | orchestrator | 2025-06-01 05:02:30 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:30.267500 | orchestrator | 2025-06-01 05:02:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:33.324258 | orchestrator | 2025-06-01 05:02:33 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:33.327625 | orchestrator | 2025-06-01 05:02:33 | INFO  | Task 
5d46a243-2f79-4d5b-bdf9-534405fd9688 is in state SUCCESS 2025-06-01 05:02:33.329539 | orchestrator | 2025-06-01 05:02:33.329586 | orchestrator | 2025-06-01 05:02:33.329602 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-01 05:02:33.329622 | orchestrator | 2025-06-01 05:02:33.329652 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-01 05:02:33.329673 | orchestrator | Sunday 01 June 2025 05:00:45 +0000 (0:00:00.180) 0:00:00.180 *********** 2025-06-01 05:02:33.329693 | orchestrator | changed: [localhost] 2025-06-01 05:02:33.329714 | orchestrator | 2025-06-01 05:02:33.329734 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-01 05:02:33.329783 | orchestrator | Sunday 01 June 2025 05:00:45 +0000 (0:00:00.866) 0:00:01.046 *********** 2025-06-01 05:02:33.329795 | orchestrator | changed: [localhost] 2025-06-01 05:02:33.329806 | orchestrator | 2025-06-01 05:02:33.329817 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-01 05:02:33.329828 | orchestrator | Sunday 01 June 2025 05:01:17 +0000 (0:00:31.672) 0:00:32.719 *********** 2025-06-01 05:02:33.329839 | orchestrator | changed: [localhost] 2025-06-01 05:02:33.329955 | orchestrator | 2025-06-01 05:02:33.329977 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:02:33.329996 | orchestrator | 2025-06-01 05:02:33.330008 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:02:33.330076 | orchestrator | Sunday 01 June 2025 05:01:22 +0000 (0:00:04.525) 0:00:37.245 *********** 2025-06-01 05:02:33.330088 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:02:33.330099 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:02:33.330110 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:02:33.330121 
| orchestrator | 2025-06-01 05:02:33.330132 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:02:33.330145 | orchestrator | Sunday 01 June 2025 05:01:22 +0000 (0:00:00.361) 0:00:37.606 *********** 2025-06-01 05:02:33.330159 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-01 05:02:33.330172 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-01 05:02:33.330186 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-01 05:02:33.330199 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-01 05:02:33.330212 | orchestrator | 2025-06-01 05:02:33.330225 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-01 05:02:33.330237 | orchestrator | skipping: no hosts matched 2025-06-01 05:02:33.330251 | orchestrator | 2025-06-01 05:02:33.330264 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:02:33.330277 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 05:02:33.330292 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 05:02:33.330305 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 05:02:33.330316 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 05:02:33.330327 | orchestrator | 2025-06-01 05:02:33.330338 | orchestrator | 2025-06-01 05:02:33.330348 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:02:33.330359 | orchestrator | Sunday 01 June 2025 05:01:22 +0000 (0:00:00.520) 0:00:38.126 *********** 2025-06-01 05:02:33.330370 | orchestrator | 
=============================================================================== 2025-06-01 05:02:33.330381 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.67s 2025-06-01 05:02:33.330392 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.53s 2025-06-01 05:02:33.330402 | orchestrator | Ensure the destination directory exists --------------------------------- 0.87s 2025-06-01 05:02:33.330413 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-06-01 05:02:33.330440 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-06-01 05:02:33.330451 | orchestrator | 2025-06-01 05:02:33.330462 | orchestrator | 2025-06-01 05:02:33.330472 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:02:33.330483 | orchestrator | 2025-06-01 05:02:33.330494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:02:33.330504 | orchestrator | Sunday 01 June 2025 05:01:29 +0000 (0:00:00.296) 0:00:00.297 *********** 2025-06-01 05:02:33.330526 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:02:33.330538 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:02:33.330548 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:02:33.330559 | orchestrator | 2025-06-01 05:02:33.330570 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:02:33.330580 | orchestrator | Sunday 01 June 2025 05:01:29 +0000 (0:00:00.314) 0:00:00.611 *********** 2025-06-01 05:02:33.330591 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-01 05:02:33.330606 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-01 05:02:33.330625 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-01 05:02:33.330642 | 
orchestrator | 2025-06-01 05:02:33.330659 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-01 05:02:33.330679 | orchestrator | 2025-06-01 05:02:33.330700 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-01 05:02:33.330718 | orchestrator | Sunday 01 June 2025 05:01:29 +0000 (0:00:00.364) 0:00:00.976 *********** 2025-06-01 05:02:33.330755 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:02:33.330767 | orchestrator | 2025-06-01 05:02:33.330778 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-01 05:02:33.330789 | orchestrator | Sunday 01 June 2025 05:01:30 +0000 (0:00:00.554) 0:00:01.531 *********** 2025-06-01 05:02:33.330815 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-01 05:02:33.330827 | orchestrator | 2025-06-01 05:02:33.330838 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-01 05:02:33.330849 | orchestrator | Sunday 01 June 2025 05:01:33 +0000 (0:00:03.289) 0:00:04.820 *********** 2025-06-01 05:02:33.330887 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-01 05:02:33.330926 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-01 05:02:33.330938 | orchestrator | 2025-06-01 05:02:33.330949 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-01 05:02:33.330960 | orchestrator | Sunday 01 June 2025 05:01:40 +0000 (0:00:06.334) 0:00:11.154 *********** 2025-06-01 05:02:33.330971 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 05:02:33.330982 | orchestrator | 2025-06-01 05:02:33.330993 | orchestrator | TASK 
[service-ks-register : placement | Creating users] ************************ 2025-06-01 05:02:33.331004 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:03.071) 0:00:14.226 *********** 2025-06-01 05:02:33.331014 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 05:02:33.331025 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-01 05:02:33.331036 | orchestrator | 2025-06-01 05:02:33.331046 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-01 05:02:33.331057 | orchestrator | Sunday 01 June 2025 05:01:46 +0000 (0:00:03.461) 0:00:17.687 *********** 2025-06-01 05:02:33.331068 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 05:02:33.331079 | orchestrator | 2025-06-01 05:02:33.331089 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-01 05:02:33.331100 | orchestrator | Sunday 01 June 2025 05:01:49 +0000 (0:00:02.899) 0:00:20.586 *********** 2025-06-01 05:02:33.331111 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-01 05:02:33.331122 | orchestrator | 2025-06-01 05:02:33.331132 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-01 05:02:33.331143 | orchestrator | Sunday 01 June 2025 05:01:54 +0000 (0:00:04.665) 0:00:25.252 *********** 2025-06-01 05:02:33.331154 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:33.331164 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:33.331175 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:33.331186 | orchestrator | 2025-06-01 05:02:33.331206 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-01 05:02:33.331217 | orchestrator | Sunday 01 June 2025 05:01:54 +0000 (0:00:00.639) 0:00:25.891 *********** 2025-06-01 05:02:33.331232 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.331256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.331279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.331291 | orchestrator | 2025-06-01 05:02:33.331302 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-01 05:02:33.331313 | orchestrator | Sunday 01 June 2025 05:01:56 +0000 (0:00:01.451) 0:00:27.343 *********** 2025-06-01 05:02:33.331324 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:33.331335 | orchestrator | 2025-06-01 05:02:33.331346 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-01 05:02:33.331357 | orchestrator | Sunday 01 June 2025 05:01:56 +0000 (0:00:00.118) 0:00:27.462 *********** 2025-06-01 05:02:33.331368 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:33.331378 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:33.331389 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:33.331400 | orchestrator | 2025-06-01 05:02:33.331411 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-01 05:02:33.331421 | orchestrator | Sunday 01 June 2025 05:01:56 +0000 (0:00:00.415) 0:00:27.877 *********** 2025-06-01 05:02:33.331432 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:02:33.331450 | orchestrator | 2025-06-01 05:02:33.331461 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-01 05:02:33.331471 | orchestrator | Sunday 01 June 2025 05:01:57 +0000 (0:00:00.475) 0:00:28.353 *********** 2025-06-01 05:02:33.331483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.331500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.331512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.331524 | orchestrator | 2025-06-01 05:02:33.331541 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-01 05:02:33.331553 | orchestrator | Sunday 01 June 2025 05:01:58 +0000 (0:00:01.246) 0:00:29.599 *********** 2025-06-01 05:02:33.331564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.331582 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:33.331594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.331612 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:33.331638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.331656 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:33.331676 | orchestrator | 2025-06-01 05:02:33.331696 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-01 05:02:33.331748 | orchestrator | Sunday 01 June 2025 05:01:59 +0000 (0:00:00.601) 0:00:30.200 *********** 2025-06-01 05:02:33.331770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.331789 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:33.331822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.331855 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:33.331875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.331918 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:33.331937 | orchestrator | 2025-06-01 05:02:33.331955 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-01 05:02:33.331973 | orchestrator | Sunday 01 
June 2025 05:01:59 +0000 (0:00:00.621) 0:00:30.822 *********** 2025-06-01 05:02:33.332034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332090 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332124 | orchestrator | 2025-06-01 05:02:33.332143 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-01 05:02:33.332163 | orchestrator | Sunday 01 June 2025 05:02:01 +0000 (0:00:01.316) 0:00:32.138 *********** 2025-06-01 05:02:33.332184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332254 | orchestrator | 
2025-06-01 05:02:33.332274 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-01 05:02:33.332287 | orchestrator | Sunday 01 June 2025 05:02:04 +0000 (0:00:03.025) 0:00:35.164 *********** 2025-06-01 05:02:33.332298 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-01 05:02:33.332309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-01 05:02:33.332320 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-01 05:02:33.332331 | orchestrator | 2025-06-01 05:02:33.332342 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-01 05:02:33.332367 | orchestrator | Sunday 01 June 2025 05:02:05 +0000 (0:00:01.420) 0:00:36.585 *********** 2025-06-01 05:02:33.332379 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:33.332390 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:33.332401 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:33.332411 | orchestrator | 2025-06-01 05:02:33.332423 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-01 05:02:33.332434 | orchestrator | Sunday 01 June 2025 05:02:06 +0000 (0:00:01.296) 0:00:37.881 *********** 2025-06-01 05:02:33.332445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.332457 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:33.332468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.332480 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:33.332496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 05:02:33.332507 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:33.332518 | orchestrator | 2025-06-01 05:02:33.332529 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-01 05:02:33.332540 | orchestrator | Sunday 01 June 2025 05:02:07 +0000 (0:00:00.499) 0:00:38.380 *********** 2025-06-01 05:02:33.332559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 05:02:33.332603 | orchestrator | 2025-06-01 05:02:33.332622 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-01 05:02:33.332640 | orchestrator | Sunday 01 June 2025 05:02:09 +0000 (0:00:02.098) 0:00:40.478 *********** 2025-06-01 05:02:33.332658 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:33.332676 | orchestrator 
| 2025-06-01 05:02:33.332695 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-01 05:02:33.332714 | orchestrator | Sunday 01 June 2025 05:02:11 +0000 (0:00:01.968) 0:00:42.447 *********** 2025-06-01 05:02:33.332732 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:33.332749 | orchestrator | 2025-06-01 05:02:33.332761 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-01 05:02:33.332772 | orchestrator | Sunday 01 June 2025 05:02:13 +0000 (0:00:02.139) 0:00:44.587 *********** 2025-06-01 05:02:33.332782 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:33.332793 | orchestrator | 2025-06-01 05:02:33.332804 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-01 05:02:33.332815 | orchestrator | Sunday 01 June 2025 05:02:26 +0000 (0:00:12.872) 0:00:57.460 *********** 2025-06-01 05:02:33.332825 | orchestrator | 2025-06-01 05:02:33.332836 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-01 05:02:33.332853 | orchestrator | Sunday 01 June 2025 05:02:26 +0000 (0:00:00.061) 0:00:57.521 *********** 2025-06-01 05:02:33.332864 | orchestrator | 2025-06-01 05:02:33.332875 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-01 05:02:33.333061 | orchestrator | Sunday 01 June 2025 05:02:26 +0000 (0:00:00.064) 0:00:57.586 *********** 2025-06-01 05:02:33.333104 | orchestrator | 2025-06-01 05:02:33.333115 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-01 05:02:33.333126 | orchestrator | Sunday 01 June 2025 05:02:26 +0000 (0:00:00.073) 0:00:57.660 *********** 2025-06-01 05:02:33.333137 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:33.333149 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:33.333160 | orchestrator | 
changed: [testbed-node-2] 2025-06-01 05:02:33.333171 | orchestrator | 2025-06-01 05:02:33.333182 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:02:33.333193 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 05:02:33.333206 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 05:02:33.333217 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 05:02:33.333228 | orchestrator | 2025-06-01 05:02:33.333239 | orchestrator | 2025-06-01 05:02:33.333249 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:02:33.333260 | orchestrator | Sunday 01 June 2025 05:02:32 +0000 (0:00:05.645) 0:01:03.306 *********** 2025-06-01 05:02:33.333270 | orchestrator | =============================================================================== 2025-06-01 05:02:33.333292 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.87s 2025-06-01 05:02:33.333302 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.33s 2025-06-01 05:02:33.333312 | orchestrator | placement : Restart placement-api container ----------------------------- 5.65s 2025-06-01 05:02:33.333322 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.67s 2025-06-01 05:02:33.333331 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.46s 2025-06-01 05:02:33.333341 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.29s 2025-06-01 05:02:33.333350 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.07s 2025-06-01 05:02:33.333360 | orchestrator | placement : Copying over 
placement.conf --------------------------------- 3.03s 2025-06-01 05:02:33.333369 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.90s 2025-06-01 05:02:33.333379 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.14s 2025-06-01 05:02:33.333388 | orchestrator | placement : Check placement containers ---------------------------------- 2.10s 2025-06-01 05:02:33.333401 | orchestrator | placement : Creating placement databases -------------------------------- 1.97s 2025-06-01 05:02:33.333417 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.45s 2025-06-01 05:02:33.333441 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.42s 2025-06-01 05:02:33.333457 | orchestrator | placement : Copying over config.json files for services ----------------- 1.32s 2025-06-01 05:02:33.333473 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s 2025-06-01 05:02:33.333488 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.25s 2025-06-01 05:02:33.333503 | orchestrator | placement : include_tasks ----------------------------------------------- 0.64s 2025-06-01 05:02:33.333518 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s 2025-06-01 05:02:33.333534 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.60s 2025-06-01 05:02:33.333550 | orchestrator | 2025-06-01 05:02:33 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:33.333792 | orchestrator | 2025-06-01 05:02:33 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:33.333827 | orchestrator | 2025-06-01 05:02:33 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:36.385595 | orchestrator | 2025-06-01 05:02:36 
| INFO  | Task a55daf2a-5395-4e76-9275-c89bf674e6da is in state STARTED 2025-06-01 05:02:36.386309 | orchestrator | 2025-06-01 05:02:36 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:36.386787 | orchestrator | 2025-06-01 05:02:36 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:36.387753 | orchestrator | 2025-06-01 05:02:36 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:36.387813 | orchestrator | 2025-06-01 05:02:36 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:39.419342 | orchestrator | 2025-06-01 05:02:39 | INFO  | Task a55daf2a-5395-4e76-9275-c89bf674e6da is in state STARTED 2025-06-01 05:02:39.419944 | orchestrator | 2025-06-01 05:02:39 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state STARTED 2025-06-01 05:02:39.420694 | orchestrator | 2025-06-01 05:02:39 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:39.421469 | orchestrator | 2025-06-01 05:02:39 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state STARTED 2025-06-01 05:02:39.421847 | orchestrator | 2025-06-01 05:02:39 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:02:42.477508 | orchestrator | 2025-06-01 05:02:42 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED 2025-06-01 05:02:42.481795 | orchestrator | 2025-06-01 05:02:42 | INFO  | Task a55daf2a-5395-4e76-9275-c89bf674e6da is in state STARTED 2025-06-01 05:02:42.486768 | orchestrator | 2025-06-01 05:02:42 | INFO  | Task 8d06218f-440e-44f8-b68c-31512bf85ad2 is in state SUCCESS 2025-06-01 05:02:42.487625 | orchestrator | 2025-06-01 05:02:42.491605 | orchestrator | 2025-06-01 05:02:42.492027 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:02:42.492067 | orchestrator | 2025-06-01 05:02:42.492090 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-06-01 05:02:42.492111 | orchestrator | Sunday 01 June 2025 04:58:21 +0000 (0:00:00.194) 0:00:00.194 *********** 2025-06-01 05:02:42.492137 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:02:42.492165 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:02:42.492185 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:02:42.492205 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:02:42.492226 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:02:42.492246 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:02:42.492260 | orchestrator | 2025-06-01 05:02:42.492271 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:02:42.492283 | orchestrator | Sunday 01 June 2025 04:58:21 +0000 (0:00:00.549) 0:00:00.744 *********** 2025-06-01 05:02:42.492294 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-01 05:02:42.492305 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-01 05:02:42.492316 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-01 05:02:42.492327 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-01 05:02:42.492338 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-01 05:02:42.492349 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-01 05:02:42.492359 | orchestrator | 2025-06-01 05:02:42.492370 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-01 05:02:42.492381 | orchestrator | 2025-06-01 05:02:42.492392 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 05:02:42.492403 | orchestrator | Sunday 01 June 2025 04:58:22 +0000 (0:00:00.530) 0:00:01.274 *********** 2025-06-01 05:02:42.492448 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 05:02:42.492470 | orchestrator | 2025-06-01 05:02:42.492486 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-01 05:02:42.492504 | orchestrator | Sunday 01 June 2025 04:58:23 +0000 (0:00:01.054) 0:00:02.329 *********** 2025-06-01 05:02:42.492520 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:02:42.492538 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:02:42.492555 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:02:42.492573 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:02:42.492591 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:02:42.492608 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:02:42.492625 | orchestrator | 2025-06-01 05:02:42.492642 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-01 05:02:42.492659 | orchestrator | Sunday 01 June 2025 04:58:24 +0000 (0:00:01.127) 0:00:03.456 *********** 2025-06-01 05:02:42.492676 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:02:42.492715 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:02:42.492735 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:02:42.492753 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:02:42.492772 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:02:42.492789 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:02:42.492807 | orchestrator | 2025-06-01 05:02:42.492827 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-01 05:02:42.492846 | orchestrator | Sunday 01 June 2025 04:58:25 +0000 (0:00:01.099) 0:00:04.556 *********** 2025-06-01 05:02:42.492865 | orchestrator | ok: [testbed-node-0] => { 2025-06-01 05:02:42.492884 | orchestrator |  "changed": false, 2025-06-01 05:02:42.492951 | orchestrator |  "msg": "All assertions passed" 2025-06-01 05:02:42.492973 | orchestrator | } 2025-06-01 05:02:42.492992 | orchestrator | 
ok: [testbed-node-1] => { 2025-06-01 05:02:42.493011 | orchestrator |  "changed": false, 2025-06-01 05:02:42.493030 | orchestrator |  "msg": "All assertions passed" 2025-06-01 05:02:42.493049 | orchestrator | } 2025-06-01 05:02:42.493068 | orchestrator | ok: [testbed-node-2] => { 2025-06-01 05:02:42.493087 | orchestrator |  "changed": false, 2025-06-01 05:02:42.493105 | orchestrator |  "msg": "All assertions passed" 2025-06-01 05:02:42.493123 | orchestrator | } 2025-06-01 05:02:42.493141 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 05:02:42.493159 | orchestrator |  "changed": false, 2025-06-01 05:02:42.493178 | orchestrator |  "msg": "All assertions passed" 2025-06-01 05:02:42.493197 | orchestrator | } 2025-06-01 05:02:42.493215 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 05:02:42.493233 | orchestrator |  "changed": false, 2025-06-01 05:02:42.493252 | orchestrator |  "msg": "All assertions passed" 2025-06-01 05:02:42.493271 | orchestrator | } 2025-06-01 05:02:42.493290 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 05:02:42.493307 | orchestrator |  "changed": false, 2025-06-01 05:02:42.493324 | orchestrator |  "msg": "All assertions passed" 2025-06-01 05:02:42.493342 | orchestrator | } 2025-06-01 05:02:42.493359 | orchestrator | 2025-06-01 05:02:42.493377 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-01 05:02:42.493395 | orchestrator | Sunday 01 June 2025 04:58:26 +0000 (0:00:00.789) 0:00:05.346 *********** 2025-06-01 05:02:42.493415 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.493434 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.493452 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.493470 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.493489 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.493507 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.493526 | orchestrator | 2025-06-01 
05:02:42.493566 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-01 05:02:42.493588 | orchestrator | Sunday 01 June 2025 04:58:27 +0000 (0:00:00.626) 0:00:05.972 *********** 2025-06-01 05:02:42.493608 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-01 05:02:42.493650 | orchestrator | 2025-06-01 05:02:42.493670 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-01 05:02:42.493690 | orchestrator | Sunday 01 June 2025 04:58:30 +0000 (0:00:02.887) 0:00:08.860 *********** 2025-06-01 05:02:42.493709 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-01 05:02:42.493730 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-01 05:02:42.493750 | orchestrator | 2025-06-01 05:02:42.493792 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-01 05:02:42.493813 | orchestrator | Sunday 01 June 2025 04:58:35 +0000 (0:00:05.740) 0:00:14.601 *********** 2025-06-01 05:02:42.493833 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 05:02:42.493853 | orchestrator | 2025-06-01 05:02:42.493873 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-01 05:02:42.493920 | orchestrator | Sunday 01 June 2025 04:58:38 +0000 (0:00:03.057) 0:00:17.658 *********** 2025-06-01 05:02:42.493940 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 05:02:42.493959 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-01 05:02:42.493979 | orchestrator | 2025-06-01 05:02:42.493999 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-01 05:02:42.494083 | orchestrator | Sunday 01 June 2025 04:58:42 +0000 
(0:00:03.570) 0:00:21.228 *********** 2025-06-01 05:02:42.494109 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 05:02:42.494128 | orchestrator | 2025-06-01 05:02:42.494147 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-01 05:02:42.494166 | orchestrator | Sunday 01 June 2025 04:58:45 +0000 (0:00:03.150) 0:00:24.379 *********** 2025-06-01 05:02:42.494184 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-01 05:02:42.494203 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-01 05:02:42.494221 | orchestrator | 2025-06-01 05:02:42.494241 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 05:02:42.494261 | orchestrator | Sunday 01 June 2025 04:58:52 +0000 (0:00:07.101) 0:00:31.480 *********** 2025-06-01 05:02:42.494282 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.494303 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.494324 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.494344 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.494364 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.494384 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.494404 | orchestrator | 2025-06-01 05:02:42.494423 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-01 05:02:42.494444 | orchestrator | Sunday 01 June 2025 04:58:53 +0000 (0:00:00.835) 0:00:32.316 *********** 2025-06-01 05:02:42.494465 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.494485 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.494505 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.494525 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.494545 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
05:02:42.494565 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.494586 | orchestrator | 2025-06-01 05:02:42.494607 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-01 05:02:42.494628 | orchestrator | Sunday 01 June 2025 04:58:55 +0000 (0:00:02.200) 0:00:34.517 *********** 2025-06-01 05:02:42.494648 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:02:42.494669 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:02:42.494690 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:02:42.494708 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:02:42.494728 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:02:42.494749 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:02:42.494769 | orchestrator | 2025-06-01 05:02:42.494806 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-01 05:02:42.494829 | orchestrator | Sunday 01 June 2025 04:58:56 +0000 (0:00:01.157) 0:00:35.674 *********** 2025-06-01 05:02:42.494849 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.494870 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.494919 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.494941 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.494960 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.494979 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.494998 | orchestrator | 2025-06-01 05:02:42.495018 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-01 05:02:42.495038 | orchestrator | Sunday 01 June 2025 04:58:59 +0000 (0:00:02.443) 0:00:38.117 *********** 2025-06-01 05:02:42.495076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.495136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.495160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.495181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.495219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.495240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.495261 | orchestrator | 2025-06-01 05:02:42.495290 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-01 05:02:42.495311 | orchestrator | Sunday 01 June 2025 04:59:02 +0000 (0:00:03.231) 0:00:41.349 *********** 2025-06-01 05:02:42.495331 | orchestrator | [WARNING]: Skipped 2025-06-01 05:02:42.495352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-01 05:02:42.495372 | orchestrator | due to this access issue: 2025-06-01 05:02:42.495393 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-01 05:02:42.495413 | orchestrator | a directory 2025-06-01 05:02:42.495432 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 05:02:42.495452 | orchestrator | 2025-06-01 05:02:42.495470 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 05:02:42.495499 | orchestrator | Sunday 01 June 2025 04:59:03 +0000 (0:00:00.875) 0:00:42.224 *********** 2025-06-01 05:02:42.495519 | orchestrator | 
included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 05:02:42.495540 | orchestrator | 2025-06-01 05:02:42.495559 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-01 05:02:42.495579 | orchestrator | Sunday 01 June 2025 04:59:04 +0000 (0:00:01.357) 0:00:43.582 *********** 2025-06-01 05:02:42.495600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.495634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.495655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.495699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2025-06-01 05:02:42.495735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.495757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.495791 | orchestrator | 2025-06-01 05:02:42.495812 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-01 05:02:42.495832 | orchestrator | Sunday 01 June 2025 04:59:08 +0000 (0:00:03.473) 0:00:47.055 *********** 2025-06-01 05:02:42.495854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.495875 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.495939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.495963 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.495993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.496016 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.496053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.496075 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.496097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.496132 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.496155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.496175 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.496196 | orchestrator | 2025-06-01 05:02:42.496217 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-01 05:02:42.496238 | orchestrator | Sunday 01 June 2025 04:59:10 +0000 (0:00:02.378) 0:00:49.434 *********** 2025-06-01 05:02:42.496260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.496281 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.496320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.496345 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.496366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.496399 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.496422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.496443 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.496465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.496487 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.496525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.496547 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.496567 | orchestrator | 2025-06-01 05:02:42.496589 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-01 05:02:42.496609 | orchestrator | Sunday 01 June 2025 04:59:14 +0000 (0:00:03.417) 0:00:52.851 *********** 2025-06-01 05:02:42.496630 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.496651 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.496672 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.496693 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.496714 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.496734 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
05:02:42.496765 | orchestrator | 2025-06-01 05:02:42.496787 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-01 05:02:42.496817 | orchestrator | Sunday 01 June 2025 04:59:17 +0000 (0:00:03.070) 0:00:55.921 *********** 2025-06-01 05:02:42.496837 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.496855 | orchestrator | 2025-06-01 05:02:42.496873 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-01 05:02:42.496888 | orchestrator | Sunday 01 June 2025 04:59:17 +0000 (0:00:00.112) 0:00:56.033 *********** 2025-06-01 05:02:42.496939 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.496958 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.496975 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.496990 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.497007 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.497022 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.497038 | orchestrator | 2025-06-01 05:02:42.497054 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-01 05:02:42.497071 | orchestrator | Sunday 01 June 2025 04:59:17 +0000 (0:00:00.635) 0:00:56.669 *********** 2025-06-01 05:02:42.497089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.497108 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.497128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.497147 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.497164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.497196 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.497236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.497257 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.497275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 
05:02:42.497293 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.497310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.497329 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.497347 | orchestrator | 2025-06-01 05:02:42.497365 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-01 05:02:42.497384 | orchestrator | Sunday 01 June 2025 04:59:20 +0000 (0:00:02.428) 0:00:59.098 *********** 2025-06-01 05:02:42.497402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.497431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.497472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.497489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.497504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.497520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.497535 | orchestrator | 2025-06-01 05:02:42.497551 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-01 05:02:42.497567 | orchestrator | Sunday 01 June 2025 04:59:23 +0000 (0:00:03.224) 0:01:02.322 *********** 2025-06-01 05:02:42.497599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.497629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.497647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.497663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.497681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.497714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.497732 | orchestrator | 2025-06-01 05:02:42.497748 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-01 05:02:42.497764 | orchestrator | Sunday 01 June 2025 04:59:31 
+0000 (0:00:07.748) 0:01:10.071 *********** 2025-06-01 05:02:42.497793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.497811 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.497827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.497844 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.497861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.497878 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.497970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.498011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.498082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.498099 | orchestrator | 2025-06-01 05:02:42.498116 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-01 05:02:42.498133 | orchestrator | Sunday 01 June 2025 04:59:36 +0000 (0:00:05.387) 0:01:15.459 *********** 2025-06-01 05:02:42.498150 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498167 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498183 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498200 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.498217 | orchestrator | changed: 
[testbed-node-0] 2025-06-01 05:02:42.498235 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.498250 | orchestrator | 2025-06-01 05:02:42.498263 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-01 05:02:42.498276 | orchestrator | Sunday 01 June 2025 04:59:39 +0000 (0:00:02.944) 0:01:18.403 *********** 2025-06-01 05:02:42.498289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.498315 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-06-01 05:02:42.498346 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.498382 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.498425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.498439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.498461 | orchestrator | 2025-06-01 05:02:42.498475 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-01 05:02:42.498489 | orchestrator | Sunday 01 June 2025 04:59:43 +0000 (0:00:04.264) 0:01:22.668 *********** 2025-06-01 
05:02:42.498503 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.498517 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498532 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.498545 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498559 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.498573 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498588 | orchestrator | 2025-06-01 05:02:42.498603 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-01 05:02:42.498614 | orchestrator | Sunday 01 June 2025 04:59:46 +0000 (0:00:02.607) 0:01:25.275 *********** 2025-06-01 05:02:42.498622 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.498630 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.498638 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.498646 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498654 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498662 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498670 | orchestrator | 2025-06-01 05:02:42.498677 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-01 05:02:42.498685 | orchestrator | Sunday 01 June 2025 04:59:49 +0000 (0:00:02.634) 0:01:27.910 *********** 2025-06-01 05:02:42.498693 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.498701 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.498709 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498716 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.498724 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498732 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498740 | orchestrator | 2025-06-01 05:02:42.498747 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] 
*********************************** 2025-06-01 05:02:42.498755 | orchestrator | Sunday 01 June 2025 04:59:50 +0000 (0:00:01.734) 0:01:29.644 *********** 2025-06-01 05:02:42.498763 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.498782 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.498790 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498798 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498806 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.498814 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498822 | orchestrator | 2025-06-01 05:02:42.498829 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-01 05:02:42.498837 | orchestrator | Sunday 01 June 2025 04:59:53 +0000 (0:00:02.719) 0:01:32.363 *********** 2025-06-01 05:02:42.498845 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.498853 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.498861 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.498869 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498877 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.498885 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.498915 | orchestrator | 2025-06-01 05:02:42.498930 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-01 05:02:42.498939 | orchestrator | Sunday 01 June 2025 04:59:55 +0000 (0:00:01.841) 0:01:34.205 *********** 2025-06-01 05:02:42.498947 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.498955 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.498963 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.498978 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.498989 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.499002 | orchestrator | skipping: 
[testbed-node-4] 2025-06-01 05:02:42.499016 | orchestrator | 2025-06-01 05:02:42.499037 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-01 05:02:42.499050 | orchestrator | Sunday 01 June 2025 04:59:57 +0000 (0:00:02.227) 0:01:36.432 *********** 2025-06-01 05:02:42.499063 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 05:02:42.499076 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.499088 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 05:02:42.499101 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.499114 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 05:02:42.499126 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.499138 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 05:02:42.499151 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.499164 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 05:02:42.499175 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.499186 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 05:02:42.499199 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.499212 | orchestrator | 2025-06-01 05:02:42.499225 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-01 05:02:42.499238 | orchestrator | Sunday 01 June 2025 04:59:59 +0000 (0:00:02.343) 0:01:38.776 *********** 2025-06-01 05:02:42.499252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.499267 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.499280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.499293 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.499315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.499349 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.499427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.499439 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.499447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.499455 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.499463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.499471 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.499479 | orchestrator | 2025-06-01 05:02:42.499487 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-01 05:02:42.499495 | orchestrator | Sunday 01 June 2025 05:00:02 +0000 (0:00:02.938) 0:01:41.715 *********** 2025-06-01 05:02:42.499509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.499525 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.499542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.499550 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.499558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.499567 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.499575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.499583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.499592 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.499599 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.499617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.499626 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.499634 | orchestrator | 2025-06-01 05:02:42.499642 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-01 05:02:42.499787 | orchestrator | Sunday 01 June 2025 05:00:04 +0000 (0:00:02.013) 0:01:43.729 *********** 2025-06-01 05:02:42.499796 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.499804 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.499811 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.499819 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.499847 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.499862 | orchestrator | skipping: 
[testbed-node-5] 2025-06-01 05:02:42.499870 | orchestrator | 2025-06-01 05:02:42.499879 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-01 05:02:42.499887 | orchestrator | Sunday 01 June 2025 05:00:07 +0000 (0:00:02.556) 0:01:46.285 *********** 2025-06-01 05:02:42.499950 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.499959 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.499967 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.499975 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:02:42.499983 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:02:42.499991 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:02:42.499999 | orchestrator | 2025-06-01 05:02:42.500007 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-01 05:02:42.500014 | orchestrator | Sunday 01 June 2025 05:00:11 +0000 (0:00:03.896) 0:01:50.181 *********** 2025-06-01 05:02:42.500022 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.500030 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500038 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.500046 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500053 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500061 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500069 | orchestrator | 2025-06-01 05:02:42.500077 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-01 05:02:42.500085 | orchestrator | Sunday 01 June 2025 05:00:15 +0000 (0:00:04.496) 0:01:54.678 *********** 2025-06-01 05:02:42.500093 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.500101 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500109 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500117 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 05:02:42.500124 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500132 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500140 | orchestrator | 2025-06-01 05:02:42.500148 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-01 05:02:42.500156 | orchestrator | Sunday 01 June 2025 05:00:18 +0000 (0:00:02.653) 0:01:57.331 *********** 2025-06-01 05:02:42.500164 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500171 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.500179 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500187 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.500194 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500202 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500217 | orchestrator | 2025-06-01 05:02:42.500225 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-01 05:02:42.500233 | orchestrator | Sunday 01 June 2025 05:00:21 +0000 (0:00:02.857) 0:02:00.189 *********** 2025-06-01 05:02:42.500241 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.500248 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500256 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500264 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.500272 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500280 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500288 | orchestrator | 2025-06-01 05:02:42.500296 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-01 05:02:42.500304 | orchestrator | Sunday 01 June 2025 05:00:25 +0000 (0:00:04.155) 0:02:04.345 *********** 2025-06-01 05:02:42.500311 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500319 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 05:02:42.500327 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500355 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.500368 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500380 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500393 | orchestrator | 2025-06-01 05:02:42.500406 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-01 05:02:42.500417 | orchestrator | Sunday 01 June 2025 05:00:30 +0000 (0:00:05.234) 0:02:09.579 *********** 2025-06-01 05:02:42.500455 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500467 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.500506 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.500519 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500531 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500543 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500555 | orchestrator | 2025-06-01 05:02:42.500568 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-01 05:02:42.500579 | orchestrator | Sunday 01 June 2025 05:00:33 +0000 (0:00:02.778) 0:02:12.357 *********** 2025-06-01 05:02:42.500590 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.500601 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500612 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.500624 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.500636 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.500649 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.500661 | orchestrator | 2025-06-01 05:02:42.500800 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-01 05:02:42.500815 | orchestrator | Sunday 01 June 2025 05:00:36 +0000 (0:00:02.575) 0:02:14.933 
*********** 2025-06-01 05:02:42.500828 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.500842 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.501025 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.501045 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.501117 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.501135 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.501149 | orchestrator | 2025-06-01 05:02:42.501162 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-01 05:02:42.501176 | orchestrator | Sunday 01 June 2025 05:00:38 +0000 (0:00:02.392) 0:02:17.325 *********** 2025-06-01 05:02:42.501189 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 05:02:42.501204 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.501217 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 05:02:42.501231 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.502415 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 05:02:42.502501 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.502546 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 05:02:42.502559 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.502570 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 05:02:42.502581 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.502592 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 05:02:42.502603 | orchestrator | skipping: [testbed-node-5] 
2025-06-01 05:02:42.502615 | orchestrator | 2025-06-01 05:02:42.502626 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-01 05:02:42.502637 | orchestrator | Sunday 01 June 2025 05:00:42 +0000 (0:00:03.838) 0:02:21.164 *********** 2025-06-01 05:02:42.502651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.502666 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.502678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.502690 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.502701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 05:02:42.502712 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.502753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.502775 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.502787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.502798 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:02:42.502809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 05:02:42.502824 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.502844 | orchestrator | 2025-06-01 05:02:42.502864 | orchestrator | TASK [neutron : Check neutron containers] 
************************************** 2025-06-01 05:02:42.502882 | orchestrator | Sunday 01 June 2025 05:00:45 +0000 (0:00:03.102) 0:02:24.266 *********** 2025-06-01 05:02:42.502944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.502972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-06-01 05:02:42.503020 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.503040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.503058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 05:02:42.503077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 05:02:42.503098 | orchestrator | 2025-06-01 05:02:42.503118 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 05:02:42.503136 | orchestrator | Sunday 01 June 2025 05:00:49 +0000 (0:00:03.733) 0:02:27.999 *********** 2025-06-01 05:02:42.503155 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.503167 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.503178 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.503189 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:02:42.503200 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:02:42.503218 | orchestrator | skipping: [testbed-node-5] 
2025-06-01 05:02:42.503229 | orchestrator | 2025-06-01 05:02:42.503240 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-01 05:02:42.503251 | orchestrator | Sunday 01 June 2025 05:00:49 +0000 (0:00:00.569) 0:02:28.569 *********** 2025-06-01 05:02:42.503262 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.503272 | orchestrator | 2025-06-01 05:02:42.503283 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-01 05:02:42.503294 | orchestrator | Sunday 01 June 2025 05:00:51 +0000 (0:00:01.931) 0:02:30.500 *********** 2025-06-01 05:02:42.503305 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.503316 | orchestrator | 2025-06-01 05:02:42.503340 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-01 05:02:42.503352 | orchestrator | Sunday 01 June 2025 05:00:53 +0000 (0:00:02.006) 0:02:32.507 *********** 2025-06-01 05:02:42.503363 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.503374 | orchestrator | 2025-06-01 05:02:42.503385 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 05:02:42.503396 | orchestrator | Sunday 01 June 2025 05:01:34 +0000 (0:00:40.793) 0:03:13.301 *********** 2025-06-01 05:02:42.503406 | orchestrator | 2025-06-01 05:02:42.503417 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 05:02:42.503428 | orchestrator | Sunday 01 June 2025 05:01:34 +0000 (0:00:00.077) 0:03:13.378 *********** 2025-06-01 05:02:42.503439 | orchestrator | 2025-06-01 05:02:42.503450 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 05:02:42.503469 | orchestrator | Sunday 01 June 2025 05:01:34 +0000 (0:00:00.267) 0:03:13.646 *********** 2025-06-01 05:02:42.503480 | orchestrator | 2025-06-01 05:02:42.503491 | 
orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 05:02:42.503502 | orchestrator | Sunday 01 June 2025 05:01:34 +0000 (0:00:00.063) 0:03:13.709 *********** 2025-06-01 05:02:42.503513 | orchestrator | 2025-06-01 05:02:42.503524 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 05:02:42.503535 | orchestrator | Sunday 01 June 2025 05:01:34 +0000 (0:00:00.064) 0:03:13.773 *********** 2025-06-01 05:02:42.503546 | orchestrator | 2025-06-01 05:02:42.503557 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 05:02:42.503567 | orchestrator | Sunday 01 June 2025 05:01:35 +0000 (0:00:00.063) 0:03:13.837 *********** 2025-06-01 05:02:42.503578 | orchestrator | 2025-06-01 05:02:42.503589 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-01 05:02:42.503600 | orchestrator | Sunday 01 June 2025 05:01:35 +0000 (0:00:00.065) 0:03:13.903 *********** 2025-06-01 05:02:42.503610 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.503621 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.503632 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.503643 | orchestrator | 2025-06-01 05:02:42.503654 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-01 05:02:42.503664 | orchestrator | Sunday 01 June 2025 05:02:07 +0000 (0:00:32.370) 0:03:46.273 *********** 2025-06-01 05:02:42.503675 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:02:42.503686 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:02:42.503697 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:02:42.503708 | orchestrator | 2025-06-01 05:02:42.503719 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:02:42.503731 | orchestrator | 
testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-01 05:02:42.503743 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-01 05:02:42.503754 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-01 05:02:42.503772 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-01 05:02:42.503783 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-01 05:02:42.503794 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-01 05:02:42.503805 | orchestrator | 2025-06-01 05:02:42.503816 | orchestrator | 2025-06-01 05:02:42.503827 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:02:42.503838 | orchestrator | Sunday 01 June 2025 05:02:39 +0000 (0:00:31.671) 0:04:17.944 *********** 2025-06-01 05:02:42.503849 | orchestrator | =============================================================================== 2025-06-01 05:02:42.503859 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.79s 2025-06-01 05:02:42.503870 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.37s 2025-06-01 05:02:42.503881 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 31.67s 2025-06-01 05:02:42.503918 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.75s 2025-06-01 05:02:42.503931 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.10s 2025-06-01 05:02:42.503943 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.74s 2025-06-01 05:02:42.503954 | 
orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 5.39s 2025-06-01 05:02:42.503965 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 5.23s 2025-06-01 05:02:42.503976 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.50s 2025-06-01 05:02:42.503987 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.26s 2025-06-01 05:02:42.503997 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.16s 2025-06-01 05:02:42.504008 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.90s 2025-06-01 05:02:42.504019 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.84s 2025-06-01 05:02:42.504035 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.73s 2025-06-01 05:02:42.504046 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.57s 2025-06-01 05:02:42.504057 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.47s 2025-06-01 05:02:42.504068 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.42s 2025-06-01 05:02:42.504079 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.23s 2025-06-01 05:02:42.504090 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.22s 2025-06-01 05:02:42.504100 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.15s 2025-06-01 05:02:42.504118 | orchestrator | 2025-06-01 05:02:42 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:02:42.504130 | orchestrator | 2025-06-01 05:02:42.504142 | orchestrator | 2025-06-01 05:02:42.504152 | orchestrator | PLAY [Group hosts based 
on configuration] **************************************
2025-06-01 05:02:42.504163 | orchestrator |
2025-06-01 05:02:42.504177 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 05:02:42.504195 | orchestrator | Sunday 01 June 2025 04:59:47 +0000 (0:00:00.653) 0:00:00.653 ***********
2025-06-01 05:02:42.504213 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:02:42.504233 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:02:42.504252 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:02:42.504271 | orchestrator |
2025-06-01 05:02:42.504283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 05:02:42.504303 | orchestrator | Sunday 01 June 2025 04:59:48 +0000 (0:00:00.626) 0:00:01.280 ***********
2025-06-01 05:02:42.504314 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-06-01 05:02:42.504325 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-06-01 05:02:42.504336 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-06-01 05:02:42.504347 | orchestrator |
2025-06-01 05:02:42.504357 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-06-01 05:02:42.504368 | orchestrator |
2025-06-01 05:02:42.504379 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-01 05:02:42.504390 | orchestrator | Sunday 01 June 2025 04:59:48 +0000 (0:00:00.714) 0:00:01.994 ***********
2025-06-01 05:02:42.504401 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:02:42.504412 | orchestrator |
2025-06-01 05:02:42.504423 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-06-01 05:02:42.504434 | orchestrator | Sunday 01 June 2025 04:59:49 +0000 (0:00:00.471) 0:00:02.465 ***********
2025-06-01 05:02:42.504445 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-06-01 05:02:42.504456 | orchestrator |
2025-06-01 05:02:42.504467 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-06-01 05:02:42.504478 | orchestrator | Sunday 01 June 2025 04:59:52 +0000 (0:00:03.156) 0:00:05.621 ***********
2025-06-01 05:02:42.504488 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-06-01 05:02:42.504499 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-06-01 05:02:42.504510 | orchestrator |
2025-06-01 05:02:42.504521 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-06-01 05:02:42.504533 | orchestrator | Sunday 01 June 2025 04:59:58 +0000 (0:00:05.750) 0:00:11.372 ***********
2025-06-01 05:02:42.504544 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 05:02:42.504555 | orchestrator |
2025-06-01 05:02:42.504566 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-06-01 05:02:42.504577 | orchestrator | Sunday 01 June 2025 05:00:01 +0000 (0:00:03.214) 0:00:14.586 ***********
2025-06-01 05:02:42.504588 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 05:02:42.504598 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-06-01 05:02:42.504609 | orchestrator |
2025-06-01 05:02:42.504620 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-06-01 05:02:42.504630 | orchestrator | Sunday 01 June 2025 05:00:05 +0000 (0:00:03.856) 0:00:18.443 ***********
2025-06-01 05:02:42.504641 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 05:02:42.504652 | orchestrator |
2025-06-01 05:02:42.504663 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-06-01 05:02:42.504673 | orchestrator | Sunday 01 June 2025 05:00:08 +0000 (0:00:03.614) 0:00:22.058 ***********
2025-06-01 05:02:42.504684 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-06-01 05:02:42.504695 | orchestrator |
2025-06-01 05:02:42.504706 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-06-01 05:02:42.504716 | orchestrator | Sunday 01 June 2025 05:00:13 +0000 (0:00:04.904) 0:00:26.963 ***********
2025-06-01 05:02:42.504735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.504767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.504780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.504792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.504804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.504816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.504839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.504986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505056 | orchestrator |
2025-06-01 05:02:42.505067 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-06-01 05:02:42.505078 | orchestrator | Sunday 01 June 2025 05:00:17 +0000 (0:00:04.107) 0:00:31.070 ***********
2025-06-01 05:02:42.505089 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:02:42.505100 | orchestrator |
2025-06-01 05:02:42.505111 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-06-01 05:02:42.505122 | orchestrator | Sunday 01 June 2025 05:00:18 +0000 (0:00:00.303) 0:00:31.375 ***********
2025-06-01 05:02:42.505132 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:02:42.505143 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:02:42.505154 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:02:42.505173 | orchestrator |
2025-06-01 05:02:42.505184 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-01 05:02:42.505195 | orchestrator | Sunday 01 June 2025 05:00:18 +0000 (0:00:00.293) 0:00:31.668 ***********
2025-06-01 05:02:42.505205 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:02:42.505216 | orchestrator |
2025-06-01 05:02:42.505228 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-06-01 05:02:42.505239 | orchestrator | Sunday 01 June 2025 05:00:19 +0000 (0:00:00.795) 0:00:32.464 ***********
2025-06-01 05:02:42.505255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.505274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.505286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.505298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.505310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.505333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.505352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505519 | orchestrator |
2025-06-01 05:02:42.505530 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-06-01 05:02:42.505541 | orchestrator | Sunday 01 June 2025 05:00:25 +0000 (0:00:06.680) 0:00:39.144 ***********
2025-06-01 05:02:42.505552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.505574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.505587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505640 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:02:42.505652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.505669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.505688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505741 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:02:42.505752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 05:02:42.505768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 05:02:42.505789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 05:02:42.505801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.505812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.505831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.505842 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.505853 | orchestrator | 2025-06-01 05:02:42.505864 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-01 05:02:42.505875 | orchestrator | Sunday 01 June 2025 05:00:29 +0000 (0:00:03.860) 0:00:43.005 *********** 2025-06-01 05:02:42.505887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.505960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 05:02:42.505981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.505993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506105 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 05:02:42.506126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.506154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 05:02:42.506186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506440 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.506452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.506464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 05:02:42.506482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.506539 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.506550 | orchestrator | 2025-06-01 05:02:42.506593 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-01 05:02:42.506607 | orchestrator | Sunday 01 June 2025 05:00:31 +0000 (0:00:01.526) 0:00:44.531 *********** 2025-06-01 05:02:42.506619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.506636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.506648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.506671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506816 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.506983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507067 | orchestrator | 2025-06-01 05:02:42.507079 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-01 05:02:42.507090 | orchestrator | Sunday 01 June 2025 05:00:38 +0000 (0:00:07.041) 0:00:51.573 *********** 2025-06-01 05:02:42.507142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.507157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.507174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.507194 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-06-01 05:02:42.507233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507282 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507482 | orchestrator | 2025-06-01 05:02:42.507500 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-01 05:02:42.507517 | orchestrator | Sunday 01 June 2025 05:00:57 +0000 (0:00:18.717) 0:01:10.291 *********** 2025-06-01 05:02:42.507535 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-01 05:02:42.507553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-01 05:02:42.507571 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-01 05:02:42.507589 | orchestrator | 2025-06-01 05:02:42.507610 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-01 05:02:42.507631 | orchestrator | Sunday 01 June 2025 05:01:01 +0000 (0:00:04.152) 0:01:14.443 *********** 2025-06-01 05:02:42.507651 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-01 05:02:42.507670 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-01 05:02:42.507690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-01 05:02:42.507708 | orchestrator | 2025-06-01 05:02:42.507729 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-01 05:02:42.507761 | orchestrator | Sunday 01 June 2025 05:01:03 +0000 (0:00:02.446) 0:01:16.890 *********** 2025-06-01 05:02:42.507782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.507812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.507847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.507868 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.507944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.507957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.507979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.507996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508137 | orchestrator | 2025-06-01 05:02:42.508149 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-01 05:02:42.508160 | orchestrator | Sunday 01 June 2025 05:01:07 +0000 (0:00:03.336) 0:01:20.226 *********** 2025-06-01 05:02:42.508180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.508192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.508217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.508230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508253 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.508531 | orchestrator | 2025-06-01 05:02:42.508550 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-01 05:02:42.508571 | orchestrator | Sunday 01 June 2025 05:01:09 +0000 (0:00:02.927) 0:01:23.153 *********** 2025-06-01 05:02:42.508590 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.508611 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.508625 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.508636 | orchestrator | 2025-06-01 05:02:42.508647 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-01 05:02:42.508658 | orchestrator | Sunday 01 June 2025 05:01:10 +0000 (0:00:00.418) 0:01:23.571 *********** 2025-06-01 05:02:42.508678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.508700 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 05:02:42.508712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.508770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 05:02:42.508800 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.508812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.508863 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.508875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 05:02:42.508974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 05:02:42.508990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.509008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.509020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.509032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:02:42.509043 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.509054 | orchestrator | 2025-06-01 05:02:42.509065 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-01 05:02:42.509077 | orchestrator | Sunday 01 June 2025 05:01:10 +0000 (0:00:00.640) 0:01:24.212 *********** 2025-06-01 05:02:42.509089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.509116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.509128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 05:02:42.509149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509173 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:02:42.509360 | orchestrator | 2025-06-01 05:02:42.509371 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-01 05:02:42.509381 | orchestrator | Sunday 01 June 2025 05:01:16 +0000 (0:00:05.157) 0:01:29.369 *********** 2025-06-01 05:02:42.509397 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:02:42.509407 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:02:42.509416 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:02:42.509426 | orchestrator | 2025-06-01 05:02:42.509436 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-01 05:02:42.509446 | orchestrator | Sunday 01 June 2025 05:01:16 +0000 (0:00:00.352) 0:01:29.722 *********** 2025-06-01 05:02:42.509456 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-01 05:02:42.509466 | orchestrator | 2025-06-01 05:02:42.509476 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-01 05:02:42.509486 | orchestrator | Sunday 01 June 2025 05:01:19 +0000 (0:00:02.927) 0:01:32.650 *********** 2025-06-01 05:02:42.509495 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 05:02:42.509505 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-01 05:02:42.509515 | orchestrator | 2025-06-01 05:02:42.509524 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-01 05:02:42.509534 | orchestrator | Sunday 01 June 2025 05:01:21 +0000 (0:00:02.080) 0:01:34.731 *********** 2025-06-01 05:02:42.509544 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.509554 | orchestrator | 2025-06-01 05:02:42.509564 | orchestrator | TASK [designate : Flush handlers] 
********************************************** 2025-06-01 05:02:42.509574 | orchestrator | Sunday 01 June 2025 05:01:36 +0000 (0:00:15.238) 0:01:49.969 *********** 2025-06-01 05:02:42.509583 | orchestrator | 2025-06-01 05:02:42.509593 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-01 05:02:42.509603 | orchestrator | Sunday 01 June 2025 05:01:36 +0000 (0:00:00.070) 0:01:50.039 *********** 2025-06-01 05:02:42.509613 | orchestrator | 2025-06-01 05:02:42.509622 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-01 05:02:42.509632 | orchestrator | Sunday 01 June 2025 05:01:36 +0000 (0:00:00.091) 0:01:50.131 *********** 2025-06-01 05:02:42.509642 | orchestrator | 2025-06-01 05:02:42.509652 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-01 05:02:42.509662 | orchestrator | Sunday 01 June 2025 05:01:36 +0000 (0:00:00.065) 0:01:50.196 *********** 2025-06-01 05:02:42.509677 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.509687 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.509697 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.509707 | orchestrator | 2025-06-01 05:02:42.509717 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-01 05:02:42.509726 | orchestrator | Sunday 01 June 2025 05:01:47 +0000 (0:00:10.404) 0:02:00.601 *********** 2025-06-01 05:02:42.509736 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.509746 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.509756 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.509765 | orchestrator | 2025-06-01 05:02:42.509775 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-01 05:02:42.509785 | orchestrator | Sunday 01 June 2025 05:01:53 +0000 (0:00:05.835) 
0:02:06.437 *********** 2025-06-01 05:02:42.509795 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.509805 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.509815 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.509824 | orchestrator | 2025-06-01 05:02:42.509834 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-01 05:02:42.509844 | orchestrator | Sunday 01 June 2025 05:02:00 +0000 (0:00:07.378) 0:02:13.815 *********** 2025-06-01 05:02:42.509854 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.509863 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.509873 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.509883 | orchestrator | 2025-06-01 05:02:42.509913 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-01 05:02:42.509925 | orchestrator | Sunday 01 June 2025 05:02:09 +0000 (0:00:09.311) 0:02:23.126 *********** 2025-06-01 05:02:42.509942 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.509952 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.509961 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.509971 | orchestrator | 2025-06-01 05:02:42.509981 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-01 05:02:42.509991 | orchestrator | Sunday 01 June 2025 05:02:23 +0000 (0:00:13.247) 0:02:36.374 *********** 2025-06-01 05:02:42.510001 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:02:42.510010 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:02:42.510056 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:02:42.510066 | orchestrator | 2025-06-01 05:02:42.510076 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-01 05:02:42.510086 | orchestrator | Sunday 01 June 2025 05:02:33 +0000 (0:00:10.649) 0:02:47.023 
***********
2025-06-01 05:02:42.510095 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:02:42.510105 | orchestrator |
2025-06-01 05:02:42.510116 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:02:42.510127 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-01 05:02:42.510138 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 05:02:42.510148 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 05:02:42.510158 | orchestrator |
2025-06-01 05:02:42.510168 | orchestrator |
2025-06-01 05:02:42.510178 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:02:42.510221 | orchestrator | Sunday 01 June 2025 05:02:41 +0000 (0:00:07.215) 0:02:54.239 ***********
2025-06-01 05:02:42.510233 | orchestrator | ===============================================================================
2025-06-01 05:02:42.510243 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.72s
2025-06-01 05:02:42.510253 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.24s
2025-06-01 05:02:42.510263 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.25s
2025-06-01 05:02:42.510272 | orchestrator | designate : Restart designate-worker container ------------------------- 10.65s
2025-06-01 05:02:42.510282 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.40s
2025-06-01 05:02:42.510292 | orchestrator | designate : Restart designate-producer container ------------------------ 9.31s
2025-06-01 05:02:42.510302 | orchestrator | designate : Restart designate-central container ------------------------- 7.38s
2025-06-01 05:02:42.510311 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.22s
2025-06-01 05:02:42.510321 | orchestrator | designate : Copying over config.json files for services ----------------- 7.04s
2025-06-01 05:02:42.510331 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.68s
2025-06-01 05:02:42.510340 | orchestrator | designate : Restart designate-api container ----------------------------- 5.84s
2025-06-01 05:02:42.510350 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.75s
2025-06-01 05:02:42.510360 | orchestrator | designate : Check designate containers ---------------------------------- 5.16s
2025-06-01 05:02:42.510370 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.91s
2025-06-01 05:02:42.510380 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.15s
2025-06-01 05:02:42.510389 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.11s
2025-06-01 05:02:42.510399 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 3.86s
2025-06-01 05:02:42.510409 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.86s
2025-06-01 05:02:42.510426 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.61s
2025-06-01 05:02:42.510436 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.34s
2025-06-01 05:02:42.510453 | orchestrator | 2025-06-01 05:02:42 | INFO  | Task 1e128153-904d-4d80-bbe4-9cc38544ff14 is in state SUCCESS
2025-06-01 05:02:42.510464 | orchestrator | 2025-06-01 05:02:42 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:02:45.549774 | orchestrator | 2025-06-01 05:02:45 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED
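The status checks that follow are the deployer's wait loop: it polls each task's state and sleeps between rounds until everything reports SUCCESS. As a rough illustration only (the function names and the state-lookup callback below are hypothetical, not the actual OSISM client API), the pattern amounts to:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task until all of them report SUCCESS.

    get_state is a caller-supplied callback (hypothetical here) that
    returns the current state string for one task id. Returns the
    number of polling rounds that were needed.
    """
    pending = set(task_ids)
    rounds = 0
    while pending:
        rounds += 1
        # sorted() copies the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return rounds

# Demo with canned state histories: "b" succeeds one round after "a".
history = {"a": ["STARTED", "SUCCESS"], "b": ["STARTED", "STARTED", "SUCCESS"]}
rounds = wait_for_tasks(["a", "b"], lambda t: history[t].pop(0), interval=0)
# three polling rounds in total
```

Note that in the log the rounds are about three seconds apart even though the message announces a one-second wait, which suggests the per-task status lookups themselves take noticeable time between sleeps.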
2025-06-01 05:02:45.552035 | orchestrator | 2025-06-01 05:02:45 | INFO  | Task a55daf2a-5395-4e76-9275-c89bf674e6da is in state STARTED
2025-06-01 05:02:45.557313 | orchestrator | 2025-06-01 05:02:45 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:02:45.559024 | orchestrator | 2025-06-01 05:02:45 | INFO  | Task 01bfc26b-464b-4536-acc3-dd30b85b53b5 is in state STARTED
2025-06-01 05:02:45.559064 | orchestrator | 2025-06-01 05:02:45 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:02:48.606143 | orchestrator | 2025-06-01 05:02:48 | INFO  | Task 01bfc26b-464b-4536-acc3-dd30b85b53b5 is in state SUCCESS
2025-06-01 05:02:51.647280 | orchestrator | 2025-06-01 05:02:51 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
[... repeated status polls elided: tasks ef95ba8b-762a-473a-aebe-8edb491f2ee3, e85c9cf2-4470-4826-8a59-5cefd34ed71a, a55daf2a-5395-4e76-9275-c89bf674e6da and 38934cac-3c57-4f8f-af82-709ca12a456d remain in state STARTED, rechecked every ~3 s ("Wait 1 second(s) until the next check") from 05:02:51 to 05:04:26 ...]
2025-06-01 05:04:26.172621 | orchestrator | 2025-06-01 05:04:26 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:04:26.174299 | orchestrator | 2025-06-01 05:04:26 | INFO  | Task
e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED
2025-06-01 05:04:26.176777 | orchestrator | 2025-06-01 05:04:26 | INFO  | Task a55daf2a-5395-4e76-9275-c89bf674e6da is in state STARTED
2025-06-01 05:04:26.178073 | orchestrator | 2025-06-01 05:04:26 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED
2025-06-01 05:04:26.178129 | orchestrator | 2025-06-01 05:04:26 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:04:29.229450 | orchestrator | 2025-06-01 05:04:29 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:04:29.232187 | orchestrator | 2025-06-01 05:04:29 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED
2025-06-01 05:04:29.235653 | orchestrator | 2025-06-01 05:04:29 | INFO  | Task a55daf2a-5395-4e76-9275-c89bf674e6da is in state SUCCESS
2025-06-01 05:04:29.237994 | orchestrator |
2025-06-01 05:04:29.238065 | orchestrator |
2025-06-01 05:04:29.238074 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 05:04:29.238083 | orchestrator |
2025-06-01 05:04:29.238089 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 05:04:29.238096 | orchestrator | Sunday 01 June 2025 05:02:45 +0000 (0:00:00.182) 0:00:00.182 ***********
2025-06-01 05:04:29.238103 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:29.238111 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:04:29.238118 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:04:29.238124 | orchestrator |
2025-06-01 05:04:29.238130 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 05:04:29.238137 | orchestrator | Sunday 01 June 2025 05:02:46 +0000 (0:00:00.318) 0:00:00.501 ***********
2025-06-01 05:04:29.238144 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-06-01 05:04:29.238151 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-06-01 05:04:29.238157 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-06-01 05:04:29.238164 | orchestrator |
2025-06-01 05:04:29.238170 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-06-01 05:04:29.238176 | orchestrator |
2025-06-01 05:04:29.238183 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-06-01 05:04:29.238189 | orchestrator | Sunday 01 June 2025 05:02:46 +0000 (0:00:00.772) 0:00:01.273 ***********
2025-06-01 05:04:29.238195 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:04:29.238201 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:04:29.238207 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:29.238214 | orchestrator |
2025-06-01 05:04:29.238303 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:04:29.238311 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 05:04:29.238320 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 05:04:29.238326 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 05:04:29.238332 | orchestrator |
2025-06-01 05:04:29.238339 | orchestrator |
2025-06-01 05:04:29.238345 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:04:29.238351 | orchestrator | Sunday 01 June 2025 05:02:47 +0000 (0:00:00.666) 0:00:01.940 ***********
2025-06-01 05:04:29.238357 | orchestrator | ===============================================================================
2025-06-01 05:04:29.238364 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s
2025-06-01 05:04:29.238370 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.67s
2025-06-01 05:04:29.238376 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-06-01 05:04:29.238382 | orchestrator |
2025-06-01 05:04:29.238389 | orchestrator |
2025-06-01 05:04:29.238395 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 05:04:29.238422 | orchestrator |
2025-06-01 05:04:29.238429 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 05:04:29.238435 | orchestrator | Sunday 01 June 2025 05:02:37 +0000 (0:00:00.336) 0:00:00.336 ***********
2025-06-01 05:04:29.238442 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:29.238448 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:04:29.238454 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:04:29.238461 | orchestrator |
2025-06-01 05:04:29.238467 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 05:04:29.238473 | orchestrator | Sunday 01 June 2025 05:02:37 +0000 (0:00:00.355) 0:00:00.692 ***********
2025-06-01 05:04:29.238480 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-06-01 05:04:29.238487 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-06-01 05:04:29.238493 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-06-01 05:04:29.238499 | orchestrator |
2025-06-01 05:04:29.238505 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-06-01 05:04:29.238512 | orchestrator |
2025-06-01 05:04:29.238518 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-01 05:04:29.238524 | orchestrator | Sunday 01 June 2025 05:02:38 +0000 (0:00:00.793) 0:00:01.486 ***********
2025-06-01 05:04:29.238530 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2025-06-01 05:04:29.238537 | orchestrator |
2025-06-01 05:04:29.238543 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-06-01 05:04:29.238549 | orchestrator | Sunday 01 June 2025 05:02:39 +0000 (0:00:00.863) 0:00:02.349 ***********
2025-06-01 05:04:29.238561 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-06-01 05:04:29.238572 | orchestrator |
2025-06-01 05:04:29.238581 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-06-01 05:04:29.238590 | orchestrator | Sunday 01 June 2025 05:02:42 +0000 (0:00:03.482) 0:00:05.832 ***********
2025-06-01 05:04:29.238601 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-06-01 05:04:29.238613 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-06-01 05:04:29.238624 | orchestrator |
2025-06-01 05:04:29.238635 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-06-01 05:04:29.238647 | orchestrator | Sunday 01 June 2025 05:02:48 +0000 (0:00:06.445) 0:00:12.277 ***********
2025-06-01 05:04:29.238657 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 05:04:29.238669 | orchestrator |
2025-06-01 05:04:29.238677 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-06-01 05:04:29.238684 | orchestrator | Sunday 01 June 2025 05:02:52 +0000 (0:00:03.056) 0:00:15.334 ***********
2025-06-01 05:04:29.238702 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 05:04:29.238710 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-06-01 05:04:29.238717 | orchestrator |
2025-06-01 05:04:29.238725 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-06-01 05:04:29.238733 | orchestrator | Sunday 01 June 2025 05:02:55 +0000 (0:00:03.575) 0:00:18.909 ***********
2025-06-01 05:04:29.238741 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 05:04:29.238749 | orchestrator |
2025-06-01 05:04:29.238756 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-06-01 05:04:29.238764 | orchestrator | Sunday 01 June 2025 05:02:58 +0000 (0:00:03.308) 0:00:22.218 ***********
2025-06-01 05:04:29.238772 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-06-01 05:04:29.238779 | orchestrator |
2025-06-01 05:04:29.238786 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-06-01 05:04:29.238794 | orchestrator | Sunday 01 June 2025 05:03:02 +0000 (0:00:02.988) 0:00:26.167 ***********
2025-06-01 05:04:29.238802 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:29.238816 | orchestrator |
2025-06-01 05:04:29.238823 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-06-01 05:04:29.238831 | orchestrator | Sunday 01 June 2025 05:03:05 +0000 (0:00:03.616) 0:00:29.155 ***********
2025-06-01 05:04:29.238838 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:29.238846 | orchestrator |
2025-06-01 05:04:29.238853 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-06-01 05:04:29.238861 | orchestrator | Sunday 01 June 2025 05:03:09 +0000 (0:00:03.433) 0:00:32.772 ***********
2025-06-01 05:04:29.238868 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:29.238899 | orchestrator |
2025-06-01 05:04:29.238907 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-06-01 05:04:29.238915 | orchestrator | Sunday 01 June 2025 05:03:12 +0000 (0:00:03.616) 0:00:36.206 ***********
2025-06-01 05:04:29.238926
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.238938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.238946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.238960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.238974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.238981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.238988 | orchestrator | 2025-06-01 05:04:29.238994 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-01 05:04:29.239001 | orchestrator | Sunday 01 June 2025 05:03:14 +0000 (0:00:01.301) 0:00:37.507 *********** 2025-06-01 05:04:29.239008 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:29.239014 | orchestrator | 2025-06-01 05:04:29.239020 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-01 05:04:29.239026 | orchestrator | Sunday 01 June 2025 05:03:14 +0000 (0:00:00.126) 0:00:37.634 *********** 2025-06-01 05:04:29.239033 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:29.239039 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:29.239045 | orchestrator | skipping: [testbed-node-2] 
2025-06-01 05:04:29.239051 | orchestrator | 2025-06-01 05:04:29.239058 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-01 05:04:29.239064 | orchestrator | Sunday 01 June 2025 05:03:14 +0000 (0:00:00.584) 0:00:38.219 *********** 2025-06-01 05:04:29.239070 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 05:04:29.239076 | orchestrator | 2025-06-01 05:04:29.239082 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-01 05:04:29.239089 | orchestrator | Sunday 01 June 2025 05:03:15 +0000 (0:00:00.927) 0:00:39.147 *********** 2025-06-01 05:04:29.239095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239152 | orchestrator | 2025-06-01 05:04:29.239158 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-01 05:04:29.239165 | orchestrator | Sunday 01 June 
2025 05:03:18 +0000 (0:00:02.313) 0:00:41.460 *********** 2025-06-01 05:04:29.239171 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:04:29.239177 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:04:29.239184 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:04:29.239190 | orchestrator | 2025-06-01 05:04:29.239196 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-01 05:04:29.239206 | orchestrator | Sunday 01 June 2025 05:03:18 +0000 (0:00:00.309) 0:00:41.770 *********** 2025-06-01 05:04:29.239213 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:04:29.239219 | orchestrator | 2025-06-01 05:04:29.239226 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-01 05:04:29.239232 | orchestrator | Sunday 01 June 2025 05:03:19 +0000 (0:00:00.746) 0:00:42.517 *********** 2025-06-01 05:04:29.239238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239246 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239288 | orchestrator | 2025-06-01 05:04:29.239295 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-01 05:04:29.239301 | orchestrator | Sunday 01 June 2025 05:03:21 +0000 (0:00:02.263) 0:00:44.780 *********** 2025-06-01 05:04:29.239308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239326 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:29.239333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239352 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:29.239358 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239372 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:29.239378 | orchestrator | 2025-06-01 05:04:29.239384 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-01 05:04:29.239391 | orchestrator | Sunday 01 June 2025 05:03:22 +0000 (0:00:00.592) 0:00:45.373 
*********** 2025-06-01 05:04:29.239397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239415 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:29.239425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239439 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:29.239446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239464 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:29.239470 | orchestrator | 2025-06-01 05:04:29.239476 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-01 05:04:29.239482 | orchestrator | Sunday 01 June 2025 05:03:24 +0000 (0:00:02.127) 0:00:47.500 *********** 2025-06-01 05:04:29.239713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239785 | orchestrator | 2025-06-01 05:04:29.239791 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-01 05:04:29.239798 | orchestrator | Sunday 01 June 2025 05:03:27 +0000 (0:00:03.178) 0:00:50.679 *********** 2025-06-01 05:04:29.239804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.239829 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.239854 | orchestrator | 2025-06-01 05:04:29.239860 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-01 05:04:29.239866 | orchestrator | Sunday 01 June 2025 05:03:32 +0000 (0:00:05.474) 0:00:56.153 *********** 2025-06-01 05:04:29.239899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239920 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:29.239926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 
05:04:29.239946 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:29.239953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 05:04:29.239960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:29.239973 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:29.239980 | orchestrator | 2025-06-01 05:04:29.239987 | orchestrator | TASK [magnum : Check magnum containers] 
**************************************** 2025-06-01 05:04:29.239994 | orchestrator | Sunday 01 June 2025 05:03:33 +0000 (0:00:00.845) 0:00:56.998 *********** 2025-06-01 05:04:29.240001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.240012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-06-01 05:04:29.240019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 05:04:29.240026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.240040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.240047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:29.240054 | orchestrator | 2025-06-01 05:04:29.240060 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-01 05:04:29.240067 | orchestrator | Sunday 01 June 2025 05:03:35 +0000 (0:00:02.061) 0:00:59.059 *********** 2025-06-01 05:04:29.240074 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:29.240080 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:29.240087 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:29.240094 | orchestrator | 2025-06-01 05:04:29.240100 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-01 05:04:29.240107 | orchestrator | Sunday 01 June 2025 
05:03:36 +0000 (0:00:00.307) 0:00:59.366 *********** 2025-06-01 05:04:29.240113 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:29.240120 | orchestrator | 2025-06-01 05:04:29.240126 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-01 05:04:29.240133 | orchestrator | Sunday 01 June 2025 05:03:38 +0000 (0:00:02.064) 0:01:01.431 *********** 2025-06-01 05:04:29.240139 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:29.240146 | orchestrator | 2025-06-01 05:04:29.240152 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-01 05:04:29.240159 | orchestrator | Sunday 01 June 2025 05:03:40 +0000 (0:00:02.085) 0:01:03.517 *********** 2025-06-01 05:04:29.240169 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:29.240175 | orchestrator | 2025-06-01 05:04:29.240182 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-01 05:04:29.240189 | orchestrator | Sunday 01 June 2025 05:03:58 +0000 (0:00:18.292) 0:01:21.810 *********** 2025-06-01 05:04:29.240195 | orchestrator | 2025-06-01 05:04:29.240201 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-01 05:04:29.240208 | orchestrator | Sunday 01 June 2025 05:03:58 +0000 (0:00:00.066) 0:01:21.876 *********** 2025-06-01 05:04:29.240214 | orchestrator | 2025-06-01 05:04:29.240221 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-01 05:04:29.240228 | orchestrator | Sunday 01 June 2025 05:03:58 +0000 (0:00:00.076) 0:01:21.952 *********** 2025-06-01 05:04:29.240243 | orchestrator | 2025-06-01 05:04:29.240249 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-01 05:04:29.240256 | orchestrator | Sunday 01 June 2025 05:03:58 +0000 (0:00:00.074) 0:01:22.027 *********** 2025-06-01 
05:04:29.240262 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:29.240269 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:29.240275 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:29.240282 | orchestrator | 2025-06-01 05:04:29.240289 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-01 05:04:29.240295 | orchestrator | Sunday 01 June 2025 05:04:15 +0000 (0:00:16.341) 0:01:38.369 *********** 2025-06-01 05:04:29.240302 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:29.240308 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:29.240315 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:29.240321 | orchestrator | 2025-06-01 05:04:29.240328 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:04:29.240335 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 05:04:29.240342 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 05:04:29.240349 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 05:04:29.240356 | orchestrator | 2025-06-01 05:04:29.240362 | orchestrator | 2025-06-01 05:04:29.240369 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:04:29.240375 | orchestrator | Sunday 01 June 2025 05:04:26 +0000 (0:00:11.871) 0:01:50.240 *********** 2025-06-01 05:04:29.240381 | orchestrator | =============================================================================== 2025-06-01 05:04:29.240388 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.29s 2025-06-01 05:04:29.240395 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.34s 2025-06-01 05:04:29.240401 | 
orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.87s 2025-06-01 05:04:29.240408 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.45s 2025-06-01 05:04:29.240414 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.47s 2025-06-01 05:04:29.240421 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.95s 2025-06-01 05:04:29.240427 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.62s 2025-06-01 05:04:29.240433 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.58s 2025-06-01 05:04:29.240440 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.48s 2025-06-01 05:04:29.240446 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.43s 2025-06-01 05:04:29.240453 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.31s 2025-06-01 05:04:29.240459 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.18s 2025-06-01 05:04:29.240466 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.06s 2025-06-01 05:04:29.240472 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.99s 2025-06-01 05:04:29.240479 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.31s 2025-06-01 05:04:29.240526 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.26s 2025-06-01 05:04:29.240535 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.13s 2025-06-01 05:04:29.240541 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.09s 2025-06-01 05:04:29.240548 | orchestrator | 
magnum : Creating Magnum database --------------------------------------- 2.07s 2025-06-01 05:04:29.240559 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.06s 2025-06-01 05:04:29.240566 | orchestrator | 2025-06-01 05:04:29 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:04:29.240572 | orchestrator | 2025-06-01 05:04:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:04:32.283745 | orchestrator | 2025-06-01 05:04:32 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:04:32.286604 | orchestrator | 2025-06-01 05:04:32 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED 2025-06-01 05:04:32.288922 | orchestrator | 2025-06-01 05:04:32 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state STARTED 2025-06-01 05:04:32.289178 | orchestrator | 2025-06-01 05:04:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:04:56.685637 | orchestrator | 2025-06-01 05:04:56 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:04:56.687960 | orchestrator | 2025-06-01 05:04:56 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED 2025-06-01 05:04:56.693941 | orchestrator | 2025-06-01 05:04:56 | INFO  | Task 38934cac-3c57-4f8f-af82-709ca12a456d is in state SUCCESS 2025-06-01 05:04:56.696231 | orchestrator | 2025-06-01 05:04:56.696283 | orchestrator | 2025-06-01 05:04:56.696296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:04:56.696310 | orchestrator | 2025-06-01 05:04:56.696321 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-01 05:04:56.696333 | orchestrator | Sunday 01 June 2025 04:56:00 +0000 (0:00:00.208) 0:00:00.208 *********** 2025-06-01 05:04:56.696345 | orchestrator | changed: [testbed-manager] 2025-06-01 05:04:56.696358 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.696369 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:56.696380 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:56.696391 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:04:56.696402 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:04:56.696413 | orchestrator |
changed: [testbed-node-5] 2025-06-01 05:04:56.696423 | orchestrator | 2025-06-01 05:04:56.696435 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:04:56.696445 | orchestrator | Sunday 01 June 2025 04:56:01 +0000 (0:00:00.781) 0:00:00.990 *********** 2025-06-01 05:04:56.696457 | orchestrator | changed: [testbed-manager] 2025-06-01 05:04:56.696467 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.696479 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:56.696563 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:56.697059 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:04:56.697080 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:04:56.697091 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:04:56.697102 | orchestrator | 2025-06-01 05:04:56.697114 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:04:56.697125 | orchestrator | Sunday 01 June 2025 04:56:01 +0000 (0:00:00.575) 0:00:01.565 *********** 2025-06-01 05:04:56.697137 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-01 05:04:56.697253 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-01 05:04:56.697273 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-01 05:04:56.697290 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-01 05:04:56.697307 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-01 05:04:56.697324 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-01 05:04:56.697341 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-01 05:04:56.697359 | orchestrator | 2025-06-01 05:04:56.697378 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-01 05:04:56.697397 | orchestrator | 2025-06-01 
05:04:56.697415 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-01 05:04:56.697465 | orchestrator | Sunday 01 June 2025 04:56:02 +0000 (0:00:00.742) 0:00:02.308 *********** 2025-06-01 05:04:56.697486 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:04:56.697504 | orchestrator | 2025-06-01 05:04:56.697523 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-01 05:04:56.697534 | orchestrator | Sunday 01 June 2025 04:56:03 +0000 (0:00:00.597) 0:00:02.905 *********** 2025-06-01 05:04:56.697546 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-01 05:04:56.697557 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-01 05:04:56.697568 | orchestrator | 2025-06-01 05:04:56.697578 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-01 05:04:56.697589 | orchestrator | Sunday 01 June 2025 04:56:06 +0000 (0:00:03.689) 0:00:06.595 *********** 2025-06-01 05:04:56.697600 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 05:04:56.697611 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 05:04:56.697622 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.697632 | orchestrator | 2025-06-01 05:04:56.697643 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-01 05:04:56.697656 | orchestrator | Sunday 01 June 2025 04:56:10 +0000 (0:00:03.346) 0:00:09.941 *********** 2025-06-01 05:04:56.697669 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.697682 | orchestrator | 2025-06-01 05:04:56.698313 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-01 05:04:56.698340 | orchestrator | Sunday 01 June 2025 04:56:10 +0000 (0:00:00.842) 0:00:10.783 *********** 2025-06-01 
05:04:56.698352 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.698363 | orchestrator | 2025-06-01 05:04:56.698374 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-01 05:04:56.698385 | orchestrator | Sunday 01 June 2025 04:56:12 +0000 (0:00:01.398) 0:00:12.182 *********** 2025-06-01 05:04:56.698396 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.698407 | orchestrator | 2025-06-01 05:04:56.698418 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-01 05:04:56.698429 | orchestrator | Sunday 01 June 2025 04:56:15 +0000 (0:00:02.793) 0:00:14.976 *********** 2025-06-01 05:04:56.698439 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.698451 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.698461 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.698472 | orchestrator | 2025-06-01 05:04:56.698483 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-01 05:04:56.698495 | orchestrator | Sunday 01 June 2025 04:56:15 +0000 (0:00:00.332) 0:00:15.308 *********** 2025-06-01 05:04:56.698506 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:04:56.698517 | orchestrator | 2025-06-01 05:04:56.698528 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-01 05:04:56.698539 | orchestrator | Sunday 01 June 2025 04:56:42 +0000 (0:00:26.883) 0:00:42.192 *********** 2025-06-01 05:04:56.698550 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.698560 | orchestrator | 2025-06-01 05:04:56.698651 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-01 05:04:56.698667 | orchestrator | Sunday 01 June 2025 04:56:54 +0000 (0:00:12.022) 0:00:54.214 *********** 2025-06-01 05:04:56.698678 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:04:56.698689 
| orchestrator | 2025-06-01 05:04:56.698700 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-01 05:04:56.698711 | orchestrator | Sunday 01 June 2025 04:57:06 +0000 (0:00:11.726) 0:01:05.940 *********** 2025-06-01 05:04:56.699177 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:04:56.699198 | orchestrator | 2025-06-01 05:04:56.699208 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-01 05:04:56.699218 | orchestrator | Sunday 01 June 2025 04:57:06 +0000 (0:00:00.922) 0:01:06.863 *********** 2025-06-01 05:04:56.699228 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.699253 | orchestrator | 2025-06-01 05:04:56.699263 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-01 05:04:56.699273 | orchestrator | Sunday 01 June 2025 04:57:07 +0000 (0:00:00.570) 0:01:07.434 *********** 2025-06-01 05:04:56.699283 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:04:56.699293 | orchestrator | 2025-06-01 05:04:56.699302 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-01 05:04:56.699312 | orchestrator | Sunday 01 June 2025 04:57:08 +0000 (0:00:00.570) 0:01:08.005 *********** 2025-06-01 05:04:56.699322 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:04:56.699331 | orchestrator | 2025-06-01 05:04:56.699341 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-01 05:04:56.699351 | orchestrator | Sunday 01 June 2025 04:57:25 +0000 (0:00:17.139) 0:01:25.144 *********** 2025-06-01 05:04:56.699360 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.699370 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.699379 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.699390 | 
orchestrator | 
2025-06-01 05:04:56.699399 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-06-01 05:04:56.699409 | orchestrator | 
2025-06-01 05:04:56.699418 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-01 05:04:56.699428 | orchestrator | Sunday 01 June 2025 04:57:25 +0000 (0:00:00.338) 0:01:25.482 ***********
2025-06-01 05:04:56.699438 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:04:56.699447 | orchestrator | 
2025-06-01 05:04:56.699457 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-01 05:04:56.699467 | orchestrator | Sunday 01 June 2025 04:57:26 +0000 (0:00:00.556) 0:01:26.039 ***********
2025-06-01 05:04:56.699476 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699486 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699496 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.699505 | orchestrator | 
2025-06-01 05:04:56.699515 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-01 05:04:56.699525 | orchestrator | Sunday 01 June 2025 04:57:28 +0000 (0:00:02.003) 0:01:28.042 ***********
2025-06-01 05:04:56.699534 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699544 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699554 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.699563 | orchestrator | 
2025-06-01 05:04:56.699573 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-01 05:04:56.699583 | orchestrator | Sunday 01 June 2025 04:57:30 +0000 (0:00:02.031) 0:01:30.073 ***********
2025-06-01 05:04:56.699592 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.699602 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699611 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699621 | orchestrator | 
2025-06-01 05:04:56.699631 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-01 05:04:56.699640 | orchestrator | Sunday 01 June 2025 04:57:30 +0000 (0:00:00.450) 0:01:30.524 ***********
2025-06-01 05:04:56.699650 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2025-06-01 05:04:56.699659 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699669 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2025-06-01 05:04:56.699679 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699689 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-01 05:04:56.699698 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-01 05:04:56.699708 | orchestrator | 
2025-06-01 05:04:56.699718 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-01 05:04:56.699728 | orchestrator | Sunday 01 June 2025 04:57:38 +0000 (0:00:08.377) 0:01:38.901 ***********
2025-06-01 05:04:56.699737 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.699747 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699763 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699773 | orchestrator | 
2025-06-01 05:04:56.699782 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-01 05:04:56.699794 | orchestrator | Sunday 01 June 2025 04:57:39 +0000 (0:00:00.469) 0:01:39.370 ***********
2025-06-01 05:04:56.699806 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2025-06-01 05:04:56.699818 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.699829 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2025-06-01 05:04:56.699840 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699852 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2025-06-01 05:04:56.699863 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699896 | orchestrator | 
2025-06-01 05:04:56.699908 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-01 05:04:56.699919 | orchestrator | Sunday 01 June 2025 04:57:40 +0000 (0:00:01.510) 0:01:40.881 ***********
2025-06-01 05:04:56.699931 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.699943 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.699955 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.699966 | orchestrator | 
2025-06-01 05:04:56.699976 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-01 05:04:56.699986 | orchestrator | Sunday 01 June 2025 04:57:42 +0000 (0:00:01.959) 0:01:42.840 ***********
2025-06-01 05:04:56.699995 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700005 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700014 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.700024 | orchestrator | 
2025-06-01 05:04:56.700034 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-01 05:04:56.700043 | orchestrator | Sunday 01 June 2025 04:57:44 +0000 (0:00:01.481) 0:01:44.321 ***********
2025-06-01 05:04:56.700053 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700063 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700144 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.700158 | orchestrator | 
2025-06-01 05:04:56.700168 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-01 05:04:56.700177 | orchestrator | Sunday 01 June 2025 04:57:47 +0000 (0:00:02.953) 0:01:47.274 ***********
2025-06-01 05:04:56.700187 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700196 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700206 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:56.700216 | orchestrator | 
2025-06-01 05:04:56.700225 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-01 05:04:56.700235 | orchestrator | Sunday 01 June 2025 04:58:07 +0000 (0:00:19.909) 0:02:07.184 ***********
2025-06-01 05:04:56.700244 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700254 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700264 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:56.700273 | orchestrator | 
2025-06-01 05:04:56.700283 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-01 05:04:56.700292 | orchestrator | Sunday 01 June 2025 04:58:17 +0000 (0:00:10.607) 0:02:17.792 ***********
2025-06-01 05:04:56.700302 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700312 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:56.700321 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700331 | orchestrator | 
2025-06-01 05:04:56.700341 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-01 05:04:56.700350 | orchestrator | Sunday 01 June 2025 04:58:19 +0000 (0:00:01.475) 0:02:19.267 ***********
2025-06-01 05:04:56.700360 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700370 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700379 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.700389 | orchestrator | 
2025-06-01 05:04:56.700398 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-01 05:04:56.700408 | orchestrator | Sunday 01 June 2025 04:58:29 +0000 (0:00:09.706) 0:02:28.973 ***********
2025-06-01 05:04:56.700425 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.700435 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700444 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700454 | orchestrator | 
2025-06-01 05:04:56.700464 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-01 05:04:56.700473 | orchestrator | Sunday 01 June 2025 04:58:30 +0000 (0:00:01.746) 0:02:30.720 ***********
2025-06-01 05:04:56.700483 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.700493 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.700502 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.700512 | orchestrator | 
2025-06-01 05:04:56.700521 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-01 05:04:56.700531 | orchestrator | 
2025-06-01 05:04:56.700540 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-01 05:04:56.700550 | orchestrator | Sunday 01 June 2025 04:58:31 +0000 (0:00:00.340) 0:02:31.061 ***********
2025-06-01 05:04:56.700559 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:04:56.700570 | orchestrator | 
2025-06-01 05:04:56.700580 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-01 05:04:56.700589 | orchestrator | Sunday 01 June 2025 04:58:31 +0000 (0:00:00.551) 0:02:31.613 ***********
2025-06-01 05:04:56.700599 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy)) 
2025-06-01 05:04:56.700609 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-01 05:04:56.700618 | orchestrator | 
2025-06-01 05:04:56.700628 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-06-01 05:04:56.700638 | orchestrator | Sunday 01 June 2025 04:58:34 +0000 (0:00:02.918) 0:02:34.532 ***********
2025-06-01 05:04:56.700647 | 
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal) 
2025-06-01 05:04:56.700659 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public) 
2025-06-01 05:04:56.700669 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-06-01 05:04:56.700679 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-06-01 05:04:56.700688 | orchestrator | 
2025-06-01 05:04:56.700718 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-06-01 05:04:56.700729 | orchestrator | Sunday 01 June 2025 04:58:40 +0000 (0:00:06.211) 0:02:40.744 ***********
2025-06-01 05:04:56.700739 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 05:04:56.700749 | orchestrator | 
2025-06-01 05:04:56.700758 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-06-01 05:04:56.700768 | orchestrator | Sunday 01 June 2025 04:58:43 +0000 (0:00:02.937) 0:02:43.681 ***********
2025-06-01 05:04:56.700777 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 05:04:56.700787 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-06-01 05:04:56.700796 | orchestrator | 
2025-06-01 05:04:56.700806 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-01 05:04:56.700815 | orchestrator | Sunday 01 June 2025 04:58:47 +0000 (0:00:03.606) 0:02:47.287 ***********
2025-06-01 05:04:56.700825 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 05:04:56.700834 | orchestrator | 
2025-06-01 05:04:56.700844 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-01 05:04:56.700854 | orchestrator | Sunday 01 June 2025 04:58:50 +0000 (0:00:03.051) 0:02:50.339 ***********
2025-06-01 05:04:56.700863 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-01 05:04:56.700892 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-01 05:04:56.700908 | orchestrator | 
2025-06-01 05:04:56.700918 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-01 05:04:56.700999 | orchestrator | Sunday 01 June 2025 04:58:57 +0000 (0:00:07.469) 0:02:57.808 ***********
2025-06-01 05:04:56.701018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 05:04:56.701033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 05:04:56.701046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.701088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 05:04:56.701108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.701119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.701129 | orchestrator | 
2025-06-01 05:04:56.701139 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-01 05:04:56.701149 | orchestrator | Sunday 01 June 2025 04:58:59 +0000 (0:00:01.816) 0:02:59.625 ***********
2025-06-01 05:04:56.701159 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.701169 | orchestrator | 
2025-06-01 05:04:56.701178 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-01 05:04:56.701188 | orchestrator | Sunday 01 June 2025 04:58:59 +0000 (0:00:00.251) 0:02:59.876 ***********
2025-06-01 05:04:56.701197 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.701207 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.701217 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.701226 | orchestrator | 
2025-06-01 05:04:56.701236 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-01 05:04:56.701245 | orchestrator | Sunday 01 June 2025 04:59:00 +0000 (0:00:00.924) 0:03:00.801 ***********
2025-06-01 05:04:56.701255 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 05:04:56.701264 | orchestrator | 
2025-06-01 05:04:56.701274 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-01 05:04:56.701284 | orchestrator | Sunday 01 June 2025 04:59:02 +0000 (0:00:01.202) 0:03:02.004 ***********
2025-06-01 05:04:56.701293 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.701303 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.701312 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.701322 | 
orchestrator | 
2025-06-01 05:04:56.701332 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-01 05:04:56.701342 | orchestrator | Sunday 01 June 2025 04:59:02 +0000 (0:00:00.308) 0:03:02.313 ***********
2025-06-01 05:04:56.701351 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 05:04:56.701361 | orchestrator | 
2025-06-01 05:04:56.701371 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-01 05:04:56.701380 | orchestrator | Sunday 01 June 2025 04:59:03 +0000 (0:00:00.764) 0:03:03.077 ***********
2025-06-01 05:04:56.701391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 05:04:56.701435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 05:04:56.701448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 05:04:56.701460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.701476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.701512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.701523 | orchestrator | 
2025-06-01 05:04:56.701533 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-01 05:04:56.701543 | orchestrator | Sunday 01 June 2025 04:59:05 +0000 (0:00:02.558) 0:03:05.636 ***********
2025-06-01 05:04:56.701554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-06-01 05:04:56.701565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-06-01 05:04:56.701575 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.701586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-06-01 05:04:56.701631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-06-01 05:04:56.701652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-06-01 05:04:56.701665 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.701675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.701685 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.701695 | orchestrator | 2025-06-01 05:04:56.701705 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-01 05:04:56.701715 | orchestrator | Sunday 01 June 2025 04:59:06 +0000 (0:00:01.211) 0:03:06.847 *********** 2025-06-01 05:04:56.701725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 05:04:56.701742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.701752 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.701792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 05:04:56.701805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.701815 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.701826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2025-06-01 05:04:56.701842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.701852 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.701862 | orchestrator | 2025-06-01 05:04:56.701925 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-01 05:04:56.701936 | orchestrator | Sunday 01 June 2025 04:59:08 +0000 (0:00:01.283) 0:03:08.131 *********** 2025-06-01 05:04:56.701978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.701992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-06-01 05:04:56.702111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702121 | orchestrator | 2025-06-01 05:04:56.702131 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-01 05:04:56.702141 | orchestrator | Sunday 01 June 2025 04:59:10 +0000 (0:00:02.611) 0:03:10.743 *********** 2025-06-01 05:04:56.702152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702245 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702256 | orchestrator | 2025-06-01 05:04:56.702266 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-01 05:04:56.702276 | orchestrator | Sunday 01 June 2025 04:59:18 +0000 (0:00:07.226) 0:03:17.970 *********** 2025-06-01 05:04:56.702286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 05:04:56.702321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.702333 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.702343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 05:04:56.702354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.702375 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.702386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-06-01 05:04:56.702397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.702407 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.702417 | orchestrator | 2025-06-01 05:04:56.702426 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-01 05:04:56.702436 | orchestrator | Sunday 01 June 2025 04:59:19 +0000 (0:00:01.226) 0:03:19.196 *********** 2025-06-01 05:04:56.702446 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.702456 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:56.702464 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:56.702472 | orchestrator | 2025-06-01 05:04:56.702501 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-01 05:04:56.702510 | orchestrator | Sunday 01 June 2025 04:59:21 +0000 (0:00:02.428) 0:03:21.624 *********** 2025-06-01 05:04:56.702518 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.702526 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.702534 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.702542 | orchestrator | 2025-06-01 05:04:56.702550 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-01 05:04:56.702558 | orchestrator | Sunday 01 June 2025 04:59:22 +0000 (0:00:00.514) 0:03:22.138 *********** 
2025-06-01 05:04:56.702566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 05:04:56.702623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.702656 | orchestrator | 2025-06-01 05:04:56.702664 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-01 05:04:56.702672 | orchestrator | Sunday 01 June 2025 04:59:24 +0000 (0:00:01.942) 0:03:24.081 *********** 2025-06-01 05:04:56.702680 | orchestrator | 2025-06-01 05:04:56.702688 
| orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-01 05:04:56.702696 | orchestrator | Sunday 01 June 2025 04:59:24 +0000 (0:00:00.408) 0:03:24.490 *********** 2025-06-01 05:04:56.702704 | orchestrator | 2025-06-01 05:04:56.702712 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-01 05:04:56.702720 | orchestrator | Sunday 01 June 2025 04:59:24 +0000 (0:00:00.350) 0:03:24.841 *********** 2025-06-01 05:04:56.702728 | orchestrator | 2025-06-01 05:04:56.702736 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-01 05:04:56.702743 | orchestrator | Sunday 01 June 2025 04:59:25 +0000 (0:00:00.465) 0:03:25.306 *********** 2025-06-01 05:04:56.702751 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.702759 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:56.702767 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:56.702775 | orchestrator | 2025-06-01 05:04:56.702783 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-01 05:04:56.702791 | orchestrator | Sunday 01 June 2025 04:59:50 +0000 (0:00:25.133) 0:03:50.439 *********** 2025-06-01 05:04:56.702799 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:04:56.702807 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:04:56.702815 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:04:56.702823 | orchestrator | 2025-06-01 05:04:56.702830 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-01 05:04:56.702838 | orchestrator | 2025-06-01 05:04:56.702846 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 05:04:56.702854 | orchestrator | Sunday 01 June 2025 04:59:56 +0000 (0:00:06.088) 0:03:56.528 *********** 2025-06-01 05:04:56.702862 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:04:56.702888 | orchestrator | 2025-06-01 05:04:56.702896 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 05:04:56.702904 | orchestrator | Sunday 01 June 2025 04:59:57 +0000 (0:00:01.251) 0:03:57.779 *********** 2025-06-01 05:04:56.702912 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.702920 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.702928 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.702936 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.702944 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.702952 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.702960 | orchestrator | 2025-06-01 05:04:56.702968 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-01 05:04:56.702983 | orchestrator | Sunday 01 June 2025 04:59:59 +0000 (0:00:01.544) 0:03:59.324 *********** 2025-06-01 05:04:56.702991 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.702999 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.703007 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.703015 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 05:04:56.703023 | orchestrator | 2025-06-01 05:04:56.703031 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-01 05:04:56.703061 | orchestrator | Sunday 01 June 2025 05:00:01 +0000 (0:00:01.983) 0:04:01.308 *********** 2025-06-01 05:04:56.703070 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-01 05:04:56.703079 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-01 05:04:56.703086 | orchestrator | ok: 
[testbed-node-5] => (item=br_netfilter) 2025-06-01 05:04:56.703094 | orchestrator | 2025-06-01 05:04:56.703102 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-01 05:04:56.703110 | orchestrator | Sunday 01 June 2025 05:00:02 +0000 (0:00:01.017) 0:04:02.326 *********** 2025-06-01 05:04:56.703118 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-01 05:04:56.703126 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-01 05:04:56.703134 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-01 05:04:56.703142 | orchestrator | 2025-06-01 05:04:56.703150 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-01 05:04:56.703158 | orchestrator | Sunday 01 June 2025 05:00:03 +0000 (0:00:01.227) 0:04:03.554 *********** 2025-06-01 05:04:56.703165 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-01 05:04:56.703173 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.703181 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-01 05:04:56.703189 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.703197 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-01 05:04:56.703205 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.703212 | orchestrator | 2025-06-01 05:04:56.703220 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-01 05:04:56.703228 | orchestrator | Sunday 01 June 2025 05:00:04 +0000 (0:00:01.007) 0:04:04.561 *********** 2025-06-01 05:04:56.703236 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 05:04:56.703244 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 05:04:56.703252 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
05:04:56.703260 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 05:04:56.703268 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 05:04:56.703275 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.703283 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-01 05:04:56.703291 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-01 05:04:56.703299 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 05:04:56.703307 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 05:04:56.703315 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.703322 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-01 05:04:56.703330 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-01 05:04:56.703338 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-01 05:04:56.703346 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-01 05:04:56.703354 | orchestrator | 2025-06-01 05:04:56.703362 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-01 05:04:56.703375 | orchestrator | Sunday 01 June 2025 05:00:06 +0000 (0:00:02.088) 0:04:06.650 *********** 2025-06-01 05:04:56.703383 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.703403 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.703411 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.703419 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:04:56.703427 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:04:56.703434 | orchestrator | changed: 
[testbed-node-5] 2025-06-01 05:04:56.703442 | orchestrator | 2025-06-01 05:04:56.703450 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-01 05:04:56.703458 | orchestrator | Sunday 01 June 2025 05:00:08 +0000 (0:00:01.520) 0:04:08.170 *********** 2025-06-01 05:04:56.703466 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.703474 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.703482 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.703489 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:04:56.703497 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:04:56.703505 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:04:56.703513 | orchestrator | 2025-06-01 05:04:56.703521 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-01 05:04:56.703529 | orchestrator | Sunday 01 June 2025 05:00:10 +0000 (0:00:02.405) 0:04:10.576 *********** 2025-06-01 05:04:56.703537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703568 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703801 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703845 | orchestrator | 2025-06-01 05:04:56.703853 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 05:04:56.703893 | 
orchestrator | Sunday 01 June 2025 05:00:15 +0000 (0:00:04.837) 0:04:15.413 *********** 2025-06-01 05:04:56.703903 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:04:56.703913 | orchestrator | 2025-06-01 05:04:56.703920 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-01 05:04:56.703928 | orchestrator | Sunday 01 June 2025 05:00:16 +0000 (0:00:01.472) 0:04:16.886 *********** 2025-06-01 05:04:56.703941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.703995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704006 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.704147 | orchestrator | 2025-06-01 05:04:56.704155 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-01 05:04:56.704163 | orchestrator | Sunday 01 June 2025 05:00:21 +0000 (0:00:04.189) 0:04:21.075 *********** 2025-06-01 05:04:56.704194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.704209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.704218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704226 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.704238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.704247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.704276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704286 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.704294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.704307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.704319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704328 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.704336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.704344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704352 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.704383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.704398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704406 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.704414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.704422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704431 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.704439 | orchestrator | 2025-06-01 05:04:56.704447 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-01 05:04:56.704455 | orchestrator | Sunday 01 June 2025 05:00:24 +0000 (0:00:03.599) 0:04:24.675 *********** 2025-06-01 05:04:56.704467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.704476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.704506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704521 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.704529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.704538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.704550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704559 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.704567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.704597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.704613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704621 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.704630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.704638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704646 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.704658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.704666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.704675 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.704683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.704717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.704727 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.704735 | orchestrator |
2025-06-01 05:04:56.704743 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-01 05:04:56.704751 | orchestrator | Sunday 01 June 2025 05:00:29 +0000 (0:00:05.104) 0:04:29.779 ***********
2025-06-01 05:04:56.704759 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.704767 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.704775 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.704783 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 05:04:56.704791 | orchestrator |
2025-06-01 05:04:56.704799 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-01 05:04:56.704807 | orchestrator | Sunday 01 June 2025 05:00:31 +0000 (0:00:01.181) 0:04:30.960 ***********
2025-06-01 05:04:56.704815 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-01 05:04:56.704823 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 05:04:56.704831 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-01 05:04:56.704838 | orchestrator |
2025-06-01 05:04:56.704846 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-01 05:04:56.704854 | orchestrator | Sunday 01 June 2025 05:00:33 +0000 (0:00:02.304) 0:04:33.265 ***********
2025-06-01 05:04:56.704862 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 05:04:56.704918 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-01 05:04:56.704928 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-01 05:04:56.704936 | orchestrator |
2025-06-01 05:04:56.704944 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-01 05:04:56.704952 | orchestrator | Sunday 01 June 2025 05:00:35 +0000 (0:00:01.664) 0:04:34.929 ***********
2025-06-01 05:04:56.704960 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:04:56.704968 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:04:56.704976 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:04:56.704984 | orchestrator |
2025-06-01 05:04:56.704992 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-01 05:04:56.705000 | orchestrator | Sunday 01 June 2025 05:00:35 +0000 (0:00:00.739) 0:04:35.669 ***********
2025-06-01 05:04:56.705008 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:04:56.705016 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:04:56.705024 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:04:56.705031 | orchestrator |
2025-06-01 05:04:56.705039 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-01 05:04:56.705047 | orchestrator | Sunday 01 June 2025 05:00:36 +0000 (0:00:00.418) 0:04:36.087 ***********
2025-06-01 05:04:56.705055 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-01 05:04:56.705063 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-01 05:04:56.705078 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-01 05:04:56.705087 | orchestrator |
2025-06-01 05:04:56.705093 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-01 05:04:56.705104 | orchestrator | Sunday 01 June 2025 05:00:37 +0000 (0:00:01.484) 0:04:37.572 ***********
2025-06-01 05:04:56.705111 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-01 05:04:56.705117 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-01 05:04:56.705124 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-01 05:04:56.705130 | orchestrator |
2025-06-01 05:04:56.705137 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-01 05:04:56.705146 | orchestrator | Sunday 01 June 2025 05:00:39 +0000 (0:00:01.861) 0:04:39.433 ***********
2025-06-01 05:04:56.705157 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-01 05:04:56.705172 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-01 05:04:56.705188 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-01 05:04:56.705197 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-01 05:04:56.705207 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-01 05:04:56.705217 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-01 05:04:56.705226 | orchestrator |
2025-06-01 05:04:56.705236 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-01 05:04:56.705246 | orchestrator | Sunday 01 June 2025 05:00:46 +0000 (0:00:06.528) 0:04:45.962 ***********
2025-06-01 05:04:56.705256 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.705266 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.705276 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.705286 | orchestrator |
2025-06-01 05:04:56.705297 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-01 05:04:56.705309 | orchestrator | Sunday 01 June 2025 05:00:46 +0000 (0:00:00.708) 0:04:46.670 ***********
2025-06-01 05:04:56.705320 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.705331 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.705341 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.705348 | orchestrator |
2025-06-01 05:04:56.705355 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-01 05:04:56.705361 | orchestrator | Sunday 01 June 2025 05:00:47 +0000 (0:00:00.695) 0:04:47.366 ***********
2025-06-01 05:04:56.705368 | orchestrator | changed: [testbed-node-3]
2025-06-01 05:04:56.705375 | orchestrator | changed: [testbed-node-5]
2025-06-01 05:04:56.705382 | orchestrator | changed: [testbed-node-4]
2025-06-01 05:04:56.705388 | orchestrator |
2025-06-01 05:04:56.705423 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-01 05:04:56.705431 | orchestrator | Sunday 01 June 2025 05:00:49 +0000 (0:00:02.085) 0:04:49.451 ***********
2025-06-01 05:04:56.705439 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-01 05:04:56.705446 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-01 05:04:56.705453 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-01 05:04:56.705460 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-01 05:04:56.705467 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-01 05:04:56.705474 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-01 05:04:56.705481 | orchestrator |
2025-06-01 05:04:56.705488 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-01 05:04:56.705502 | orchestrator | Sunday 01 June 2025 05:00:52 +0000 (0:00:03.375) 0:04:52.827 ***********
2025-06-01 05:04:56.705508 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 05:04:56.705515 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 05:04:56.705522 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 05:04:56.705528 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 05:04:56.705535 | orchestrator | changed: [testbed-node-3]
2025-06-01 05:04:56.705541 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 05:04:56.705548 | orchestrator | changed: [testbed-node-4]
2025-06-01 05:04:56.705555 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 05:04:56.705561 | orchestrator | changed: [testbed-node-5]
2025-06-01 05:04:56.705568 | orchestrator |
2025-06-01 05:04:56.705574 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-01 05:04:56.705581 | orchestrator | Sunday 01 June 2025 05:00:56 +0000 (0:00:03.967) 0:04:56.794 ***********
2025-06-01 05:04:56.705588 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.705594 | orchestrator |
2025-06-01 05:04:56.705601 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-01 05:04:56.705608 | orchestrator | Sunday 01 June 2025 05:00:57 +0000 (0:00:00.141) 0:04:56.936 ***********
2025-06-01 05:04:56.705614 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.705621 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.705627 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.705634 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.705640 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.705647 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.705654 | orchestrator |
2025-06-01 05:04:56.705660 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-01 05:04:56.705667 | orchestrator | Sunday 01 June 2025 05:00:57 +0000 (0:00:00.612) 0:04:57.855 ***********
2025-06-01 05:04:56.705674 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 05:04:56.705680 | orchestrator |
2025-06-01 05:04:56.705687 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-01 05:04:56.705701 | orchestrator | Sunday 01 June 2025 05:00:58 +0000 (0:00:00.600) 0:04:58.468 ***********
2025-06-01 05:04:56.705707 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.705714 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.705721 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.705727 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.705734 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.705740 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.705747 | orchestrator |
2025-06-01 05:04:56.705754 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-06-01 05:04:56.705760 | orchestrator | Sunday 01 June 2025 05:00:59 +0000 (0:00:00.600) 0:04:59.069 ***********
2025-06-01 05:04:56.705768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705804 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705863 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.705952 | orchestrator | 2025-06-01 05:04:56.705967 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-01 05:04:56.705979 | orchestrator | Sunday 01 June 2025 05:01:03 +0000 (0:00:04.586) 0:05:03.655 *********** 2025-06-01 05:04:56.705990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.706006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.706047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.706068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.706089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.706102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.706115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.706297 | orchestrator | 2025-06-01 05:04:56.706308 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-01 05:04:56.706320 | orchestrator | Sunday 01 June 2025 05:01:10 +0000 (0:00:07.072) 0:05:10.728 *********** 2025-06-01 05:04:56.706333 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.706345 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.706357 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.706368 | 
orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.706380 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.706391 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.706403 | orchestrator | 2025-06-01 05:04:56.706415 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-01 05:04:56.706427 | orchestrator | Sunday 01 June 2025 05:01:12 +0000 (0:00:01.915) 0:05:12.644 *********** 2025-06-01 05:04:56.706439 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-01 05:04:56.706451 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-01 05:04:56.706463 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-01 05:04:56.706475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-01 05:04:56.706492 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-01 05:04:56.706504 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-01 05:04:56.706516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-01 05:04:56.706529 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.706541 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-01 05:04:56.706553 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.706565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-01 05:04:56.706576 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.706589 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-01 
05:04:56.706601 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-01 05:04:56.706613 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-01 05:04:56.706625 | orchestrator | 2025-06-01 05:04:56.706637 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-01 05:04:56.706649 | orchestrator | Sunday 01 June 2025 05:01:17 +0000 (0:00:04.602) 0:05:17.247 *********** 2025-06-01 05:04:56.706661 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.706672 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.706684 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.706696 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.706707 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.706720 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.706732 | orchestrator | 2025-06-01 05:04:56.706744 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-01 05:04:56.706755 | orchestrator | Sunday 01 June 2025 05:01:18 +0000 (0:00:00.835) 0:05:18.083 *********** 2025-06-01 05:04:56.706767 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-01 05:04:56.706779 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-01 05:04:56.706791 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-01 05:04:56.706809 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-01 05:04:56.706822 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-compute'}) 2025-06-01 05:04:56.706834 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-01 05:04:56.706846 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-01 05:04:56.706857 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-01 05:04:56.706893 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-01 05:04:56.706905 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-01 05:04:56.706917 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.706929 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-01 05:04:56.706940 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.706951 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-01 05:04:56.706962 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-01 05:04:56.706973 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.706985 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-01 05:04:56.706996 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-01 05:04:56.707007 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-01 05:04:56.707019 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-01 05:04:56.707030 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-01 05:04:56.707041 | orchestrator | 2025-06-01 05:04:56.707052 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-01 05:04:56.707063 | orchestrator | Sunday 01 June 2025 05:01:23 +0000 (0:00:05.585) 0:05:23.669 *********** 2025-06-01 05:04:56.707074 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 05:04:56.707086 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 05:04:56.707102 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 05:04:56.707113 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-01 05:04:56.707125 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 05:04:56.707136 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 05:04:56.707148 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-01 05:04:56.707159 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 05:04:56.707170 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-01 05:04:56.707181 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 05:04:56.707192 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 05:04:56.707203 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-01 05:04:56.707222 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.707234 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 05:04:56.707245 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 05:04:56.707256 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 05:04:56.707267 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-01 05:04:56.707278 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.707290 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 05:04:56.707300 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-01 05:04:56.707311 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.707322 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 05:04:56.707333 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 05:04:56.707344 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 05:04:56.707356 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 05:04:56.707367 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 05:04:56.707377 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 05:04:56.707389 | orchestrator | 2025-06-01 05:04:56.707400 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-01 05:04:56.707411 | orchestrator | Sunday 01 June 2025 05:01:31 +0000 
(0:00:07.711) 0:05:31.380 *********** 2025-06-01 05:04:56.707422 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.707434 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.707445 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.707457 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.707468 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.707479 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.707490 | orchestrator | 2025-06-01 05:04:56.707509 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-01 05:04:56.707522 | orchestrator | Sunday 01 June 2025 05:01:32 +0000 (0:00:00.546) 0:05:31.926 *********** 2025-06-01 05:04:56.707533 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.707545 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.707556 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.707566 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.707578 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.707589 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.707599 | orchestrator | 2025-06-01 05:04:56.707611 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-01 05:04:56.707623 | orchestrator | Sunday 01 June 2025 05:01:32 +0000 (0:00:00.871) 0:05:32.798 *********** 2025-06-01 05:04:56.707634 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.707645 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.707656 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.707667 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:04:56.707678 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:04:56.707689 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:04:56.707700 | orchestrator | 2025-06-01 05:04:56.707710 | orchestrator | TASK [nova-cell : Copying 
over existing policy file] *************************** 2025-06-01 05:04:56.707721 | orchestrator | Sunday 01 June 2025 05:01:34 +0000 (0:00:01.816) 0:05:34.614 *********** 2025-06-01 05:04:56.707739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.707758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.707770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.707782 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.707794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.707810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.707822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.707847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 05:04:56.707859 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.707884 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 05:04:56.707897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.707908 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:04:56.707925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.707938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.707957 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.707969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.707986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-06-01 05:04:56.707998 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.708010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 05:04:56.708022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 05:04:56.708034 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.708046 | orchestrator | 2025-06-01 05:04:56.708057 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-01 05:04:56.708068 | orchestrator | Sunday 01 June 2025 05:01:37 +0000 (0:00:02.739) 0:05:37.354 *********** 2025-06-01 05:04:56.708079 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-01 05:04:56.708091 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-01 05:04:56.708102 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.708113 | 
orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-01 05:04:56.708124 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-01 05:04:56.708136 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-01 05:04:56.708148 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-01 05:04:56.708159 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.708170 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-01 05:04:56.708181 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-01 05:04:56.708192 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.708208 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-01 05:04:56.708219 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-01 05:04:56.708240 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.708252 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.708262 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-01 05:04:56.708273 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-01 05:04:56.708285 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.708296 | orchestrator |
2025-06-01 05:04:56.708308 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-06-01 05:04:56.708319 | orchestrator | Sunday 01 June 2025 05:01:38 +0000 (0:00:01.105) 0:05:38.459 ***********
2025-06-01 05:04:56.708331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 05:04:56.708542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 05:04:56.708554 | orchestrator |
2025-06-01 05:04:56.708566 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-01 05:04:56.708577 | orchestrator | Sunday 01 June 2025 05:01:42 +0000 (0:00:03.865) 0:05:42.325 ***********
2025-06-01 05:04:56.708589 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.708600 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.708611 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.708622 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.708634 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.708644 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.708655 | orchestrator |
2025-06-01 05:04:56.708667 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-01 05:04:56.708679 | orchestrator | Sunday 01 June 2025 05:01:42 +0000 (0:00:00.547) 0:05:42.873 ***********
2025-06-01 05:04:56.708696 | orchestrator |
2025-06-01 05:04:56.708707 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-01 05:04:56.708718 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:00.326) 0:05:43.199 ***********
2025-06-01 05:04:56.708729 | orchestrator |
2025-06-01 05:04:56.708741 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-01 05:04:56.708752 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:00.134) 0:05:43.334 ***********
2025-06-01 05:04:56.708764 | orchestrator |
2025-06-01 05:04:56.708775 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-01 05:04:56.708786 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:00.126) 0:05:43.460 ***********
2025-06-01 05:04:56.708797 | orchestrator |
2025-06-01 05:04:56.708808 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-01 05:04:56.708818 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:00.118) 0:05:43.578 ***********
2025-06-01 05:04:56.708830 | orchestrator |
2025-06-01 05:04:56.708841 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-01 05:04:56.708852 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:00.121) 0:05:43.700 ***********
2025-06-01 05:04:56.708863 | orchestrator |
2025-06-01 05:04:56.708892 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-06-01 05:04:56.708903 | orchestrator | Sunday 01 June 2025 05:01:43 +0000 (0:00:00.122) 0:05:43.823 ***********
2025-06-01 05:04:56.708920 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.708930 | orchestrator | changed: [testbed-node-1]
2025-06-01 05:04:56.708942 | orchestrator | changed: [testbed-node-2]
2025-06-01 05:04:56.708952 | orchestrator |
2025-06-01 05:04:56.708959 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-06-01 05:04:56.708966 | orchestrator | Sunday 01 June 2025 05:01:50 +0000 (0:00:06.917) 0:05:50.740 ***********
2025-06-01 05:04:56.708972 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.708979 | orchestrator | changed: [testbed-node-2]
2025-06-01 05:04:56.708985 | orchestrator | changed: [testbed-node-1]
2025-06-01 05:04:56.708992 | orchestrator |
2025-06-01 05:04:56.708999 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-06-01 05:04:56.709005 | orchestrator | Sunday 01 June 2025 05:02:07 +0000 (0:00:16.910) 0:06:07.651 ***********
2025-06-01 05:04:56.709012 | orchestrator | changed: [testbed-node-5]
2025-06-01 05:04:56.709019 | orchestrator | changed: [testbed-node-4]
2025-06-01 05:04:56.709025 | orchestrator | changed: [testbed-node-3]
2025-06-01 05:04:56.709032 | orchestrator |
2025-06-01 05:04:56.709038 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-06-01 05:04:56.709045 | orchestrator | Sunday 01 June 2025 05:02:35 +0000 (0:00:27.635) 0:06:35.287 ***********
2025-06-01 05:04:56.709052 | orchestrator | changed: [testbed-node-4]
2025-06-01 05:04:56.709058 | orchestrator | changed: [testbed-node-3]
2025-06-01 05:04:56.709065 | orchestrator | changed: [testbed-node-5]
2025-06-01 05:04:56.709071 | orchestrator |
2025-06-01 05:04:56.709078 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-06-01 05:04:56.709085 | orchestrator | Sunday 01 June 2025 05:03:20 +0000 (0:00:44.677) 0:07:19.965 ***********
2025-06-01 05:04:56.709091 | orchestrator | changed: [testbed-node-4]
2025-06-01 05:04:56.709098 | orchestrator | changed: [testbed-node-3]
2025-06-01 05:04:56.709105 | orchestrator | changed: [testbed-node-5]
2025-06-01 05:04:56.709111 | orchestrator |
2025-06-01 05:04:56.709118 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-06-01 05:04:56.709124 | orchestrator | Sunday 01 June 2025 05:03:21 +0000 (0:00:01.085) 0:07:21.050 ***********
2025-06-01 05:04:56.709131 | orchestrator | changed: [testbed-node-3]
2025-06-01 05:04:56.709138 | orchestrator | changed: [testbed-node-4]
2025-06-01 05:04:56.709144 | orchestrator | changed: [testbed-node-5]
2025-06-01 05:04:56.709151 | orchestrator |
2025-06-01 05:04:56.709158 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container]
******************* 2025-06-01 05:04:56.709169 | orchestrator | Sunday 01 June 2025 05:03:21 +0000 (0:00:00.814) 0:07:21.865 *********** 2025-06-01 05:04:56.709182 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:04:56.709189 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:04:56.709195 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:04:56.709202 | orchestrator | 2025-06-01 05:04:56.709209 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-01 05:04:56.709215 | orchestrator | Sunday 01 June 2025 05:03:47 +0000 (0:00:25.419) 0:07:47.284 *********** 2025-06-01 05:04:56.709222 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.709229 | orchestrator | 2025-06-01 05:04:56.709235 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-01 05:04:56.709242 | orchestrator | Sunday 01 June 2025 05:03:47 +0000 (0:00:00.123) 0:07:47.407 *********** 2025-06-01 05:04:56.709248 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:04:56.709255 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:04:56.709262 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:04:56.709268 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:04:56.709275 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:04:56.709282 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-06-01 05:04:56.709288 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-01 05:04:56.709295 | orchestrator |
2025-06-01 05:04:56.709302 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-06-01 05:04:56.709308 | orchestrator | Sunday 01 June 2025 05:04:08 +0000 (0:00:21.356) 0:08:08.764 ***********
2025-06-01 05:04:56.709315 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.709322 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.709328 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.709335 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.709341 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.709348 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.709355 | orchestrator |
2025-06-01 05:04:56.709361 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-06-01 05:04:56.709368 | orchestrator | Sunday 01 June 2025 05:04:17 +0000 (0:00:08.924) 0:08:17.689 ***********
2025-06-01 05:04:56.709375 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.709381 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.709388 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.709394 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.709401 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.709408 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-06-01 05:04:56.709414 | orchestrator |
2025-06-01 05:04:56.709421 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-01 05:04:56.709428 | orchestrator | Sunday 01 June 2025 05:04:22 +0000 (0:00:04.276) 0:08:21.966 ***********
2025-06-01 05:04:56.709434 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-01 05:04:56.709441 | orchestrator |
2025-06-01 05:04:56.709447 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-01 05:04:56.709454 | orchestrator | Sunday 01 June 2025 05:04:33 +0000 (0:00:11.461) 0:08:33.427 ***********
2025-06-01 05:04:56.709461 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-01 05:04:56.709467 | orchestrator |
2025-06-01 05:04:56.709474 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-06-01 05:04:56.709480 | orchestrator | Sunday 01 June 2025 05:04:34 +0000 (0:00:01.335) 0:08:34.763 ***********
2025-06-01 05:04:56.709487 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.709494 | orchestrator |
2025-06-01 05:04:56.709506 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-06-01 05:04:56.709513 | orchestrator | Sunday 01 June 2025 05:04:36 +0000 (0:00:01.345) 0:08:36.108 ***********
2025-06-01 05:04:56.709520 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-01 05:04:56.709531 | orchestrator |
2025-06-01 05:04:56.709538 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-06-01 05:04:56.709544 | orchestrator | Sunday 01 June 2025 05:04:46 +0000 (0:00:10.749) 0:08:46.858 ***********
2025-06-01 05:04:56.709551 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:04:56.709558 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:04:56.709564 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:04:56.709571 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:04:56.709578 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:04:56.709584 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:04:56.709591 | orchestrator |
2025-06-01 05:04:56.709597 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-06-01 05:04:56.709604 | orchestrator |
2025-06-01 05:04:56.709611 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-06-01 05:04:56.709617 | orchestrator | Sunday 01 June 2025 05:04:48 +0000 (0:00:01.679) 0:08:48.537 ***********
2025-06-01 05:04:56.709624 | orchestrator | changed: [testbed-node-1]
2025-06-01 05:04:56.709631 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:04:56.709637 | orchestrator | changed: [testbed-node-2]
2025-06-01 05:04:56.709644 | orchestrator |
2025-06-01 05:04:56.709651 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-06-01 05:04:56.709657 | orchestrator |
2025-06-01 05:04:56.709664 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-06-01 05:04:56.709670 | orchestrator | Sunday 01 June 2025 05:04:49 +0000 (0:00:01.162) 0:08:49.699 ***********
2025-06-01 05:04:56.709677 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.709684 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.709690 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.709697 | orchestrator |
2025-06-01 05:04:56.709704 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-06-01 05:04:56.709710 | orchestrator |
2025-06-01 05:04:56.709717 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-06-01 05:04:56.709723 | orchestrator | Sunday 01 June 2025 05:04:50 +0000 (0:00:00.545) 0:08:50.244 ***********
2025-06-01 05:04:56.709730 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-06-01 05:04:56.709740 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-01 05:04:56.709747 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-01 05:04:56.709754 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-06-01 05:04:56.709761 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-06-01 05:04:56.709767 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-06-01 05:04:56.709774 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:04:56.709781 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-06-01 05:04:56.709787 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-01 05:04:56.709794 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-01 05:04:56.709801 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-06-01 05:04:56.709807 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-06-01 05:04:56.709814 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-06-01 05:04:56.709820 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:04:56.709827 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-06-01 05:04:56.709834 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-01 05:04:56.709840 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-01 05:04:56.709847 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-06-01 05:04:56.709854 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-06-01 05:04:56.709860 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-06-01 05:04:56.709912 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:04:56.709919 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-06-01 05:04:56.709926 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-01 05:04:56.709933 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-01 05:04:56.709940 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-06-01 05:04:56.709946 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-06-01 05:04:56.709953 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-06-01 05:04:56.709959 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.709966 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-06-01 05:04:56.709973 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-01 05:04:56.709979 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-01 05:04:56.709986 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-06-01 05:04:56.709993 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-06-01 05:04:56.709999 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-06-01 05:04:56.710006 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.710012 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-06-01 05:04:56.710040 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-01 05:04:56.710047 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-01 05:04:56.710053 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-06-01 05:04:56.710059 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-06-01 05:04:56.710065 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-06-01 05:04:56.710071 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.710078 | orchestrator |
2025-06-01 05:04:56.710087 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-06-01 05:04:56.710094 | orchestrator |
2025-06-01 05:04:56.710100 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-06-01 05:04:56.710106 | orchestrator | Sunday 01 June 2025 05:04:51 +0000 (0:00:01.300) 0:08:51.545 ***********
2025-06-01 05:04:56.710112 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-01 05:04:56.710119 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-01 05:04:56.710125 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.710131 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-01 05:04:56.710137 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-01 05:04:56.710143 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.710149 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-01 05:04:56.710155 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-01 05:04:56.710162 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:04:56.710168 | orchestrator |
2025-06-01 05:04:56.710174 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-06-01 05:04:56.710180 | orchestrator |
2025-06-01 05:04:56.710186 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-06-01 05:04:56.710192 | orchestrator | Sunday 01 June 2025 05:04:52 +0000 (0:00:00.798) 0:08:52.343 ***********
2025-06-01 05:04:56.710199 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.710205 | orchestrator |
2025-06-01 05:04:56.710211 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-06-01 05:04:56.710217 | orchestrator |
2025-06-01 05:04:56.710224 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-06-01 05:04:56.710230 | orchestrator | Sunday 01 June 2025 05:04:53 +0000 (0:00:00.679) 0:08:53.023 ***********
2025-06-01 05:04:56.710236 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:04:56.710242 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:04:56.710253 | orchestrator | skipping: [testbed-node-2]
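The online database migration tasks are skipped here because this is a fresh deployment with nothing to migrate. When they do run, they wrap `nova-manage db online_data_migrations`, which applies data migrations in bounded batches, looping until every migration reports no remaining rows. A minimal sketch of that batched pattern (the `migrate` callable below is hypothetical, standing in for a real migration function):

```python
def run_online_migrations(migrations, batch_size=50):
    """Re-run every migration in batches until none reports remaining work.

    Each migration is a callable taking a batch size and returning a
    (found, migrated) pair, loosely mirroring nova-manage's convention.
    """
    passes = 0
    while True:
        passes += 1
        found_any = False
        for migrate in migrations:
            found, _migrated = migrate(batch_size)
            if found:
                found_any = True
        if not found_any:
            return passes

# Hypothetical migration over 120 fake rows, processed 50 at a time.
rows = list(range(120))

def migrate(batch_size):
    chunk = rows[:batch_size]
    del rows[:batch_size]
    return len(chunk), len(chunk)

passes = run_online_migrations([migrate], batch_size=50)  # 3 working passes + 1 empty pass
```

Batching keeps each database transaction small, which is why these migrations can run online against a live service instead of requiring downtime.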
2025-06-01 05:04:56.710259 | orchestrator |
2025-06-01 05:04:56.710266 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:04:56.710272 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 05:04:56.710283 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-06-01 05:04:56.710290 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-01 05:04:56.710297 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-01 05:04:56.710303 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-01 05:04:56.710309 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-06-01 05:04:56.710315 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-01 05:04:56.710321 | orchestrator |
2025-06-01 05:04:56.710328 | orchestrator |
2025-06-01 05:04:56.710334 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:04:56.710340 | orchestrator | Sunday 01 June 2025 05:04:53 +0000 (0:00:00.423) 0:08:53.447 ***********
2025-06-01 05:04:56.710346 | orchestrator | ===============================================================================
2025-06-01 05:04:56.710352 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.68s
2025-06-01 05:04:56.710359 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 27.64s
2025-06-01 05:04:56.710365 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.88s
2025-06-01 05:04:56.710371 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.42s
2025-06-01 05:04:56.710378 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.13s
2025-06-01 05:04:56.710384 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.36s
2025-06-01 05:04:56.710390 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.91s
2025-06-01 05:04:56.710396 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.14s
2025-06-01 05:04:56.710402 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.91s
2025-06-01 05:04:56.710408 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.02s
2025-06-01 05:04:56.710414 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.73s
2025-06-01 05:04:56.710421 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.46s
2025-06-01 05:04:56.710427 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.75s
2025-06-01 05:04:56.710433 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.61s
2025-06-01 05:04:56.710439 | orchestrator | nova-cell : Create cell ------------------------------------------------- 9.71s
2025-06-01 05:04:56.710445 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.92s
2025-06-01 05:04:56.710454 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.38s
2025-06-01 05:04:56.710461 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.71s
2025-06-01 05:04:56.710467 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.47s
2025-06-01 05:04:56.710473 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 7.23s
2025-06-01 05:04:56.710484 | orchestrator | 2025-06-01 05:04:56 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:04:59.744235 | orchestrator | 2025-06-01 05:04:59 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:04:59.745239 | orchestrator | 2025-06-01 05:04:59 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state STARTED
2025-06-01 05:04:59.745276 | orchestrator | 2025-06-01 05:04:59 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:02.784043 | orchestrator | 2025-06-01 05:05:02 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:05:02.786063 | orchestrator | 2025-06-01 05:05:02 | INFO  | Task e85c9cf2-4470-4826-8a59-5cefd34ed71a is in state SUCCESS
2025-06-01 05:05:02.787660 | orchestrator |
2025-06-01 05:05:02.787701 | orchestrator |
2025-06-01 05:05:02.787710 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 05:05:02.787720 | orchestrator |
2025-06-01 05:05:02.787728 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 05:05:02.787736 | orchestrator | Sunday 01 June 2025 05:02:44 +0000 (0:00:00.272) 0:00:00.272 ***********
2025-06-01 05:05:02.787744 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:05:02.787753 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:05:02.787761 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:05:02.787768 | orchestrator |
2025-06-01 05:05:02.787777 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 05:05:02.787812 | orchestrator | Sunday 01 June 2025 05:02:44 +0000 (0:00:00.310) 0:00:00.582 ***********
2025-06-01 05:05:02.787820 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-01 05:05:02.787828 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
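The "Group hosts based on enabled services" task uses Ansible's `group_by` with keys such as `enable_grafana_True`, so that later plays can target exactly the hosts where a given service is enabled. The grouping logic itself can be sketched in Python (a hypothetical helper for illustration, not kolla-ansible code):

```python
def group_hosts(host_vars):
    """Bucket hosts into groups named like Ansible group_by keys, e.g. 'enable_grafana_True'."""
    groups = {}
    for host, flags in host_vars.items():
        for flag, value in flags.items():
            groups.setdefault(f"{flag}_{value}", []).append(host)
    return groups

# All three controllers have Grafana enabled, matching the log above.
hosts = {
    "testbed-node-0": {"enable_grafana": True},
    "testbed-node-1": {"enable_grafana": True},
    "testbed-node-2": {"enable_grafana": True},
}
groups = group_hosts(hosts)
```

The resulting `enable_grafana_True` group is what the subsequent "Apply role grafana" play runs against.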
2025-06-01 05:05:02.787836 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-01 05:05:02.787843 | orchestrator | 2025-06-01 05:05:02.787851 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-01 05:05:02.787858 | orchestrator | 2025-06-01 05:05:02.787885 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-01 05:05:02.787893 | orchestrator | Sunday 01 June 2025 05:02:45 +0000 (0:00:00.541) 0:00:01.124 *********** 2025-06-01 05:05:02.787901 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:05:02.788187 | orchestrator | 2025-06-01 05:05:02.788199 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-01 05:05:02.788206 | orchestrator | Sunday 01 June 2025 05:02:45 +0000 (0:00:00.577) 0:00:01.702 *********** 2025-06-01 05:05:02.788218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.788229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.788270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.788278 | orchestrator | 2025-06-01 05:05:02.788285 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-01 05:05:02.788292 | orchestrator | Sunday 01 June 2025 05:02:46 +0000 (0:00:00.785) 0:00:02.487 *********** 2025-06-01 05:05:02.788299 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-01 05:05:02.788307 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-01 05:05:02.788314 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 05:05:02.788635 | orchestrator | 2025-06-01 05:05:02.788660 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-01 05:05:02.788667 | orchestrator | Sunday 01 June 2025 05:02:47 +0000 (0:00:00.793) 0:00:03.281 *********** 2025-06-01 
05:05:02.788675 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:05:02.788682 | orchestrator | 2025-06-01 05:05:02.788688 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-01 05:05:02.788694 | orchestrator | Sunday 01 June 2025 05:02:48 +0000 (0:00:00.694) 0:00:03.975 *********** 2025-06-01 05:05:02.788732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.788741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.788749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.788755 | orchestrator | 2025-06-01 05:05:02.788762 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-01 05:05:02.788779 | orchestrator | Sunday 01 June 2025 05:02:49 +0000 (0:00:01.439) 0:00:05.415 *********** 2025-06-01 05:05:02.788786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 05:05:02.788793 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:05:02.788806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 05:05:02.788813 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:05:02.788838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 05:05:02.788845 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:05:02.788852 | orchestrator | 2025-06-01 05:05:02.788858 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-01 05:05:02.788865 | orchestrator | Sunday 01 June 2025 05:02:50 +0000 (0:00:00.369) 0:00:05.785 *********** 2025-06-01 05:05:02.788939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 05:05:02.788946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 05:05:02.788960 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:05:02.788967 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:05:02.788974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 05:05:02.788980 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:05:02.788987 | orchestrator | 2025-06-01 05:05:02.788993 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-01 05:05:02.788999 | orchestrator | 
Sunday 01 June 2025 05:02:50 +0000 (0:00:00.828) 0:00:06.613 *********** 2025-06-01 05:05:02.789010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.789017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 05:05:02.789048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.789056 | orchestrator |
2025-06-01 05:05:02.789062 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-01 05:05:02.789068 | orchestrator | Sunday 01 June 2025 05:02:52 +0000 (0:00:01.221) 0:00:07.835 ***********
2025-06-01 05:05:02.789074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.789087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.789093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.789100 | orchestrator |
2025-06-01 05:05:02.789106 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-01 05:05:02.789112 | orchestrator | Sunday 01 June 2025 05:02:53 +0000 (0:00:01.253) 0:00:09.088 ***********
2025-06-01 05:05:02.789119 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:05:02.789125 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:05:02.789131 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:05:02.789137 | orchestrator |
2025-06-01 05:05:02.789144 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-01 05:05:02.789150 | orchestrator | Sunday 01 June 2025 05:02:53 +0000 (0:00:00.567) 0:00:09.655 ***********
2025-06-01 05:05:02.789161 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-01 05:05:02.789168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-01 05:05:02.789175 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-01 05:05:02.789181 | orchestrator |
2025-06-01 05:05:02.789187 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-01 05:05:02.789193 | orchestrator | Sunday 01 June 2025 05:02:55 +0000 (0:00:01.299) 0:00:10.955 ***********
2025-06-01 05:05:02.789200 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-01 05:05:02.789207 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-01 05:05:02.789213 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-01 05:05:02.789219 | orchestrator |
2025-06-01 05:05:02.789226 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-01 05:05:02.789232 | orchestrator | Sunday 01 June 2025 05:02:56 +0000 (0:00:01.202) 0:00:12.158 ***********
2025-06-01 05:05:02.789259 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 05:05:02.789269 | orchestrator |
2025-06-01 05:05:02.789281 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-01 05:05:02.789292 | orchestrator | Sunday 01 June 2025 05:02:57 +0000 (0:00:00.735) 0:00:12.893 ***********
2025-06-01 05:05:02.789301 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-01 05:05:02.789311 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-01 05:05:02.789333 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:05:02.789344 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:05:02.789355 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:05:02.789364 | orchestrator |
2025-06-01 05:05:02.789375 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-01 05:05:02.789386 | orchestrator | Sunday 01 June 2025 05:02:57 +0000 (0:00:00.539) 0:00:13.565 ***********
2025-06-01 05:05:02.789397 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:05:02.789408 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:05:02.789419 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:05:02.789429 | orchestrator |
2025-06-01 05:05:02.789440 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-01 05:05:02.789451 | orchestrator | Sunday 01 June 2025 05:02:58 +0000 (0:00:00.539) 0:00:14.104 ***********
2025-06-01 05:05:02.789464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096915, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6326787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096915, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6326787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096915, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6326787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096910, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6256788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096910, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6256788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096910, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6256788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096907, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6216786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096907, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6216786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096907, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6216786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096913, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6296787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096913, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6296787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096913, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6296787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096903, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6176786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096903, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6176786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096903, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6176786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096908, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6226785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096908, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6226785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096908, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6226785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096912, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6286788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096912, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6286788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096912, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6286788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096902, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6176786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096902, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6176786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096902, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6176786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096897, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6116784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096897, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6116784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096897, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6116784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096904, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6186786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096904, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6186786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096904, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6186786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096899, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6156785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096899, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6156785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096899, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6156785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096911, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6266787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096911, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6266787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096911, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6266787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096905, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6206787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096905, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6206787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096905, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6206787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096914, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6296787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096914, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6296787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.789993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096914, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6296787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096901, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6166785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096901, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6166785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096901, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6166785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096909, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6246786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096909, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6246786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096909, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6246786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096898, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6146786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096898, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6146786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096898, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6146786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096900, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6166785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386,
'inode': 1096900, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6166785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096900, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6166785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096906, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6216786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096906, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6216786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096906, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6216786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096933, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6536791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096933, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6536791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096933, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6536791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096924, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.644679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096924, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.644679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096924, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.644679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096917, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6336787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790286 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096917, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6336787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096917, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6336787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096961, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6616793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096961, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6616793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096961, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6616793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096918, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6346788, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096918, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6346788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096918, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6346788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096953, 'dev': 167, 
'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6586792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096953, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6586792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096953, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6586792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096965, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6656792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096965, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6656792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096965, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6656792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096944, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6556792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096944, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6556792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096944, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6556792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-01 05:05:02.790444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096952, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.657679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096952, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.657679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096952, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.657679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096919, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6356788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096919, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6356788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096919, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6356788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096925, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.645679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096925, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.645679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096925, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.645679, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096973, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6666794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096973, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6666794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096973, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 
'ctime': 1748747891.6666794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096958, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6596792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096958, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6596792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096958, 
'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6596792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096921, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6386788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096921, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6386788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096921, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6386788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096920, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6356788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096920, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6356788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096920, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6356788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096922, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.640679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096922, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.640679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096922, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.640679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096923, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.643679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096923, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.643679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790677 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096928, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.646679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096923, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.643679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096928, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.646679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-01 05:05:02.790707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096948, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6566792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096928, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.646679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096948, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6566792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096931, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.646679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096948, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6566792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096931, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.646679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096976, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6676793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096931, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.646679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 05:05:02.790785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096976, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6676793, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096976, 'dev': 167, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748747891.6676793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-01 05:05:02.790806 | orchestrator |
2025-06-01 05:05:02.790815 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-01 05:05:02.790823 | orchestrator | Sunday 01 June 2025 05:03:34 +0000 (0:00:36.419) 0:00:50.524 ***********
2025-06-01 05:05:02.790831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.790839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.790852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 05:05:02.790861 | orchestrator |
2025-06-01 05:05:02.790894 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-06-01 05:05:02.790901 | orchestrator | Sunday 01 June 2025 05:03:35 +0000 (0:00:00.938) 0:00:51.462 ***********
2025-06-01 05:05:02.790909 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:05:02.790916 | orchestrator |
2025-06-01 05:05:02.790922 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-01 05:05:02.790929 | orchestrator | Sunday 01 June 2025 05:03:37 +0000 (0:00:02.179) 0:00:53.642 ***********
2025-06-01 05:05:02.790936 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:05:02.790943 | orchestrator |
2025-06-01 05:05:02.790951 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-01 05:05:02.790958 | orchestrator | Sunday 01 June 2025 05:03:39 +0000 (0:00:02.041) 0:00:55.684 ***********
2025-06-01 05:05:02.790965 | orchestrator |
2025-06-01 05:05:02.790973 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-01 05:05:02.790986 | orchestrator | Sunday 01 June 2025 05:03:40 +0000 (0:00:00.293) 0:00:55.977 ***********
2025-06-01 05:05:02.790999 | orchestrator |
2025-06-01 05:05:02.791007 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-01 05:05:02.791015 | orchestrator | Sunday 01 June 2025 05:03:40 +0000 (0:00:00.081) 0:00:56.059 ***********
2025-06-01 05:05:02.791022 | orchestrator |
2025-06-01 05:05:02.791029 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-01 05:05:02.791036 | orchestrator | Sunday 01 June 2025 05:03:40 +0000 (0:00:00.064) 0:00:56.123 ***********
2025-06-01 05:05:02.791044 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:05:02.791052 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:05:02.791059 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:05:02.791066 | orchestrator |
2025-06-01 05:05:02.791072 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-01 05:05:02.791079 | orchestrator | Sunday 01 June 2025 05:03:42 +0000 (0:00:01.764) 0:00:57.888 ***********
2025-06-01 05:05:02.791085 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:05:02.791091 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:05:02.791098 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-01 05:05:02.791104 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-01 05:05:02.791112 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-06-01 05:05:02.791119 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:05:02.791126 | orchestrator |
2025-06-01 05:05:02.791133 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-01 05:05:02.791141 | orchestrator | Sunday 01 June 2025 05:04:20 +0000 (0:00:38.313) 0:01:36.202 ***********
2025-06-01 05:05:02.791148 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:05:02.791156 | orchestrator | changed: [testbed-node-1]
2025-06-01 05:05:02.791163 | orchestrator | changed: [testbed-node-2]
2025-06-01 05:05:02.791171 | orchestrator |
2025-06-01 05:05:02.791178 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-01 05:05:02.791185 | orchestrator | Sunday 01 June 2025 05:04:54 +0000 (0:00:34.225) 0:02:10.427 ***********
2025-06-01 05:05:02.791193 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:05:02.791201 | orchestrator |
2025-06-01 05:05:02.791208 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-01 05:05:02.791216 | orchestrator | Sunday 01 June 2025 05:04:57 +0000 (0:00:02.390) 0:02:12.818 ***********
2025-06-01 05:05:02.791223 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:05:02.791230 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:05:02.791238 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:05:02.791245 | orchestrator |
2025-06-01 05:05:02.791253 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-01 05:05:02.791260 | orchestrator | Sunday 01 June 2025 05:04:57 +0000 (0:00:00.301) 0:02:13.120 ***********
2025-06-01 05:05:02.791268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-01 05:05:02.791279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-01 05:05:02.791288 | orchestrator |
2025-06-01 05:05:02.791295 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-01 05:05:02.791303 | orchestrator | Sunday 01 June 2025 05:04:59 +0000 (0:00:02.310) 0:02:15.430 ***********
2025-06-01 05:05:02.791311 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:05:02.791318 | orchestrator |
2025-06-01 05:05:02.791331 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:05:02.791339 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-01 05:05:02.791353 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-01 05:05:02.791361 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-01 05:05:02.791368 | orchestrator |
2025-06-01 05:05:02.791375 | orchestrator |
2025-06-01 05:05:02.791383 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:05:02.791391 | orchestrator | Sunday 01 June 2025 05:04:59 +0000 (0:00:00.253) 0:02:15.684 ***********
2025-06-01 05:05:02.791398 | orchestrator | ===============================================================================
2025-06-01 05:05:02.791406 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.31s
2025-06-01 05:05:02.791413 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.42s
2025-06-01 05:05:02.791421 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.23s
2025-06-01 05:05:02.791428 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.39s
2025-06-01 05:05:02.791436 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.31s
2025-06-01 05:05:02.791448 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.18s
2025-06-01 05:05:02.791456 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.04s
2025-06-01 05:05:02.791463 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s
2025-06-01 05:05:02.791469 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.44s
2025-06-01 05:05:02.791476 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s
2025-06-01 05:05:02.791483 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.25s
2025-06-01 05:05:02.791489 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.22s
2025-06-01 05:05:02.791496 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.20s
2025-06-01 05:05:02.791502 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.94s
2025-06-01 05:05:02.791507 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.83s
2025-06-01 05:05:02.791513 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s
2025-06-01 05:05:02.791519 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.79s
2025-06-01 05:05:02.791525 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s
2025-06-01 05:05:02.791530 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s
2025-06-01 05:05:02.791536 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.67s
2025-06-01 05:05:02.791542 | 2025-06-01 05:05:02 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:05.824439 | orchestrator | 2025-06-01 05:05:05 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:05:05.824548 | orchestrator | 2025-06-01 05:05:05 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:08.865521 | orchestrator | 2025-06-01 05:05:08 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:05:08.865623 | orchestrator | 2025-06-01 05:05:08 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:11.905772 | orchestrator | 2025-06-01 05:05:11 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:05:11.905991 | orchestrator | 2025-06-01 05:05:11 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:14.951971 | orchestrator | 2025-06-01 05:05:14 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:05:14.952115 | orchestrator | 2025-06-01 05:05:14 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:17.992331 | orchestrator | 2025-06-01 05:05:17 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:05:17.992477 | orchestrator | 2025-06-01 05:05:17 | INFO  | Wait 1 second(s) until the next check
2025-06-01 05:05:21.053018 | orchestrator | 2025-06-01 05:05:21 | INFO 
| Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED
2025-06-01 05:06:58.557245 | orchestrator | 2025-06-01 05:06:58 | INFO  | Task
ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:06:58.557344 | orchestrator | 2025-06-01 05:06:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:01.602470 | orchestrator | 2025-06-01 05:07:01 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:01.603349 | orchestrator | 2025-06-01 05:07:01 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:04.645220 | orchestrator | 2025-06-01 05:07:04 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:04.645290 | orchestrator | 2025-06-01 05:07:04 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:07.702980 | orchestrator | 2025-06-01 05:07:07 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:07.703092 | orchestrator | 2025-06-01 05:07:07 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:10.751495 | orchestrator | 2025-06-01 05:07:10 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:10.751599 | orchestrator | 2025-06-01 05:07:10 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:13.790399 | orchestrator | 2025-06-01 05:07:13 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:13.790532 | orchestrator | 2025-06-01 05:07:13 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:16.841910 | orchestrator | 2025-06-01 05:07:16 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:16.841995 | orchestrator | 2025-06-01 05:07:16 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:19.871636 | orchestrator | 2025-06-01 05:07:19 | INFO  | Task ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state STARTED 2025-06-01 05:07:19.871733 | orchestrator | 2025-06-01 05:07:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 05:07:22.917366 | orchestrator | 2025-06-01 05:07:22 | INFO  | Task 
ef95ba8b-762a-473a-aebe-8edb491f2ee3 is in state SUCCESS 2025-06-01 05:07:22.918745 | orchestrator | 2025-06-01 05:07:22.918785 | orchestrator | 2025-06-01 05:07:22.918797 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:07:22.918808 | orchestrator | 2025-06-01 05:07:22.918818 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:07:22.918868 | orchestrator | Sunday 01 June 2025 05:02:51 +0000 (0:00:00.253) 0:00:00.253 *********** 2025-06-01 05:07:22.918881 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.918891 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:07:22.918921 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:07:22.918931 | orchestrator | 2025-06-01 05:07:22.918941 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:07:22.918951 | orchestrator | Sunday 01 June 2025 05:02:52 +0000 (0:00:00.270) 0:00:00.524 *********** 2025-06-01 05:07:22.918960 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-01 05:07:22.918971 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-01 05:07:22.918980 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-01 05:07:22.918990 | orchestrator | 2025-06-01 05:07:22.918999 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-01 05:07:22.919009 | orchestrator | 2025-06-01 05:07:22.919019 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 05:07:22.919028 | orchestrator | Sunday 01 June 2025 05:02:52 +0000 (0:00:00.424) 0:00:00.948 *********** 2025-06-01 05:07:22.919038 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:07:22.919048 | orchestrator | 2025-06-01 05:07:22.919091 | 
orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-01 05:07:22.919102 | orchestrator | Sunday 01 June 2025 05:02:52 +0000 (0:00:00.521) 0:00:01.469 *********** 2025-06-01 05:07:22.919112 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-01 05:07:22.919122 | orchestrator | 2025-06-01 05:07:22.919131 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-01 05:07:22.919141 | orchestrator | Sunday 01 June 2025 05:02:56 +0000 (0:00:03.184) 0:00:04.653 *********** 2025-06-01 05:07:22.919150 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-01 05:07:22.919160 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-01 05:07:22.919170 | orchestrator | 2025-06-01 05:07:22.919180 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-01 05:07:22.919190 | orchestrator | Sunday 01 June 2025 05:03:02 +0000 (0:00:06.134) 0:00:10.788 *********** 2025-06-01 05:07:22.919200 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 05:07:22.919210 | orchestrator | 2025-06-01 05:07:22.919219 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-01 05:07:22.919234 | orchestrator | Sunday 01 June 2025 05:03:05 +0000 (0:00:03.230) 0:00:14.018 *********** 2025-06-01 05:07:22.919251 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 05:07:22.919266 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-01 05:07:22.919282 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-01 05:07:22.919298 | orchestrator | 2025-06-01 05:07:22.919313 | orchestrator | TASK [service-ks-register : octavia | Creating roles] 
************************** 2025-06-01 05:07:22.919329 | orchestrator | Sunday 01 June 2025 05:03:13 +0000 (0:00:07.977) 0:00:21.996 *********** 2025-06-01 05:07:22.919344 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 05:07:22.919359 | orchestrator | 2025-06-01 05:07:22.919375 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-01 05:07:22.919931 | orchestrator | Sunday 01 June 2025 05:03:16 +0000 (0:00:03.378) 0:00:25.374 *********** 2025-06-01 05:07:22.919965 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-01 05:07:22.919975 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-01 05:07:22.919985 | orchestrator | 2025-06-01 05:07:22.919995 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-01 05:07:22.920005 | orchestrator | Sunday 01 June 2025 05:03:24 +0000 (0:00:07.559) 0:00:32.933 *********** 2025-06-01 05:07:22.920014 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-01 05:07:22.920024 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-01 05:07:22.920045 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-01 05:07:22.920053 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-01 05:07:22.920061 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-01 05:07:22.920069 | orchestrator | 2025-06-01 05:07:22.920077 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 05:07:22.920085 | orchestrator | Sunday 01 June 2025 05:03:39 +0000 (0:00:14.892) 0:00:47.825 *********** 2025-06-01 05:07:22.920093 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:07:22.920101 | 
orchestrator | 2025-06-01 05:07:22.920109 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-01 05:07:22.920117 | orchestrator | Sunday 01 June 2025 05:03:39 +0000 (0:00:00.548) 0:00:48.374 *********** 2025-06-01 05:07:22.920125 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920133 | orchestrator | 2025-06-01 05:07:22.920334 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-06-01 05:07:22.920344 | orchestrator | Sunday 01 June 2025 05:03:44 +0000 (0:00:04.826) 0:00:53.200 *********** 2025-06-01 05:07:22.920352 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920360 | orchestrator | 2025-06-01 05:07:22.920368 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-01 05:07:22.920401 | orchestrator | Sunday 01 June 2025 05:03:49 +0000 (0:00:04.336) 0:00:57.537 *********** 2025-06-01 05:07:22.920411 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.920418 | orchestrator | 2025-06-01 05:07:22.920426 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-06-01 05:07:22.920440 | orchestrator | Sunday 01 June 2025 05:03:52 +0000 (0:00:03.168) 0:01:00.706 *********** 2025-06-01 05:07:22.920449 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-01 05:07:22.920457 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-01 05:07:22.920465 | orchestrator | 2025-06-01 05:07:22.920472 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-06-01 05:07:22.920480 | orchestrator | Sunday 01 June 2025 05:04:02 +0000 (0:00:10.695) 0:01:11.401 *********** 2025-06-01 05:07:22.920488 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-06-01 05:07:22.920496 | orchestrator | 
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-06-01 05:07:22.920505 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-06-01 05:07:22.920514 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-06-01 05:07:22.920522 | orchestrator | 2025-06-01 05:07:22.920530 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-06-01 05:07:22.920538 | orchestrator | Sunday 01 June 2025 05:04:18 +0000 (0:00:15.499) 0:01:26.901 *********** 2025-06-01 05:07:22.920545 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920553 | orchestrator | 2025-06-01 05:07:22.920561 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-06-01 05:07:22.920569 | orchestrator | Sunday 01 June 2025 05:04:23 +0000 (0:00:04.626) 0:01:31.528 *********** 2025-06-01 05:07:22.920577 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920584 | orchestrator | 2025-06-01 05:07:22.920592 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-06-01 05:07:22.920600 | orchestrator | Sunday 01 June 2025 05:04:28 +0000 (0:00:05.078) 0:01:36.607 *********** 2025-06-01 05:07:22.920608 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.920616 | orchestrator | 2025-06-01 05:07:22.920623 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-06-01 05:07:22.920638 | orchestrator | Sunday 01 June 2025 05:04:28 +0000 (0:00:00.215) 0:01:36.822 *********** 2025-06-01 05:07:22.920646 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920653 | orchestrator | 2025-06-01 
05:07:22.920661 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 05:07:22.920669 | orchestrator | Sunday 01 June 2025 05:04:33 +0000 (0:00:05.133) 0:01:41.955 *********** 2025-06-01 05:07:22.920677 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:07:22.920685 | orchestrator | 2025-06-01 05:07:22.920692 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-06-01 05:07:22.920700 | orchestrator | Sunday 01 June 2025 05:04:34 +0000 (0:00:01.225) 0:01:43.181 *********** 2025-06-01 05:07:22.920708 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920716 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.920724 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.920731 | orchestrator | 2025-06-01 05:07:22.920739 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-06-01 05:07:22.920747 | orchestrator | Sunday 01 June 2025 05:04:39 +0000 (0:00:05.126) 0:01:48.307 *********** 2025-06-01 05:07:22.920755 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920763 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.920771 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.920779 | orchestrator | 2025-06-01 05:07:22.920786 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-06-01 05:07:22.920794 | orchestrator | Sunday 01 June 2025 05:04:44 +0000 (0:00:04.808) 0:01:53.115 *********** 2025-06-01 05:07:22.920802 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920812 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.920855 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.920869 | orchestrator | 2025-06-01 05:07:22.920882 | orchestrator | TASK [octavia : Install isc-dhcp-client 
package] ******************************* 2025-06-01 05:07:22.920895 | orchestrator | Sunday 01 June 2025 05:04:45 +0000 (0:00:00.822) 0:01:53.938 *********** 2025-06-01 05:07:22.920909 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:07:22.920918 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:07:22.920926 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.920934 | orchestrator | 2025-06-01 05:07:22.920941 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-06-01 05:07:22.920949 | orchestrator | Sunday 01 June 2025 05:04:47 +0000 (0:00:02.061) 0:01:56.000 *********** 2025-06-01 05:07:22.920957 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.920965 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.920973 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.920982 | orchestrator | 2025-06-01 05:07:22.920992 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-06-01 05:07:22.921001 | orchestrator | Sunday 01 June 2025 05:04:48 +0000 (0:00:01.287) 0:01:57.287 *********** 2025-06-01 05:07:22.921010 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.921019 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.921029 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.921039 | orchestrator | 2025-06-01 05:07:22.921048 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-06-01 05:07:22.921058 | orchestrator | Sunday 01 June 2025 05:04:49 +0000 (0:00:01.180) 0:01:58.467 *********** 2025-06-01 05:07:22.921068 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.921076 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.921084 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.921092 | orchestrator | 2025-06-01 05:07:22.921126 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] 
******************** 2025-06-01 05:07:22.921136 | orchestrator | Sunday 01 June 2025 05:04:52 +0000 (0:00:02.097) 0:02:00.565 *********** 2025-06-01 05:07:22.921144 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.921157 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.921171 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.921180 | orchestrator | 2025-06-01 05:07:22.921187 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-06-01 05:07:22.921195 | orchestrator | Sunday 01 June 2025 05:04:53 +0000 (0:00:01.733) 0:02:02.298 *********** 2025-06-01 05:07:22.921203 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.921211 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:07:22.921219 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:07:22.921227 | orchestrator | 2025-06-01 05:07:22.921234 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-06-01 05:07:22.921242 | orchestrator | Sunday 01 June 2025 05:04:54 +0000 (0:00:00.678) 0:02:02.977 *********** 2025-06-01 05:07:22.921250 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.921258 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:07:22.921266 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:07:22.921274 | orchestrator | 2025-06-01 05:07:22.921282 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 05:07:22.921290 | orchestrator | Sunday 01 June 2025 05:04:58 +0000 (0:00:03.648) 0:02:06.626 *********** 2025-06-01 05:07:22.921298 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:07:22.921305 | orchestrator | 2025-06-01 05:07:22.921313 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-06-01 05:07:22.921321 | orchestrator | Sunday 01 June 2025 05:04:58 
+0000 (0:00:00.730) 0:02:07.356 *********** 2025-06-01 05:07:22.921329 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.921337 | orchestrator | 2025-06-01 05:07:22.921345 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-01 05:07:22.921353 | orchestrator | Sunday 01 June 2025 05:05:02 +0000 (0:00:03.811) 0:02:11.167 *********** 2025-06-01 05:07:22.921361 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.921368 | orchestrator | 2025-06-01 05:07:22.921376 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-06-01 05:07:22.921384 | orchestrator | Sunday 01 June 2025 05:05:05 +0000 (0:00:02.963) 0:02:14.131 *********** 2025-06-01 05:07:22.921392 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-01 05:07:22.921400 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-01 05:07:22.921408 | orchestrator | 2025-06-01 05:07:22.921416 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-06-01 05:07:22.921424 | orchestrator | Sunday 01 June 2025 05:05:12 +0000 (0:00:06.411) 0:02:20.543 *********** 2025-06-01 05:07:22.921431 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.921439 | orchestrator | 2025-06-01 05:07:22.921447 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-06-01 05:07:22.921455 | orchestrator | Sunday 01 June 2025 05:05:15 +0000 (0:00:03.224) 0:02:23.768 *********** 2025-06-01 05:07:22.921463 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:07:22.921471 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:07:22.921478 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:07:22.921486 | orchestrator | 2025-06-01 05:07:22.921494 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-06-01 05:07:22.921502 | orchestrator | Sunday 01 June 2025 
05:05:15 +0000 (0:00:00.334) 0:02:24.103 *********** 2025-06-01 05:07:22.921513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.921547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 
05:07:22.921561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.921570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.921579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-06-01 05:07:22.921587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.921596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.921740 | orchestrator | 2025-06-01 05:07:22.921748 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-01 05:07:22.921756 | orchestrator | Sunday 01 June 2025 05:05:18 +0000 (0:00:02.588) 0:02:26.692 *********** 2025-06-01 05:07:22.921764 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.921772 | orchestrator | 2025-06-01 05:07:22.921811 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-01 05:07:22.921826 | orchestrator | Sunday 01 June 2025 05:05:18 +0000 (0:00:00.338) 0:02:27.030 *********** 2025-06-01 05:07:22.921860 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.921878 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:07:22.921892 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:07:22.921907 | orchestrator | 2025-06-01 05:07:22.921921 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-01 05:07:22.921935 | orchestrator | Sunday 01 June 2025 05:05:18 +0000 (0:00:00.317) 0:02:27.347 *********** 2025-06-01 05:07:22.921949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.921958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.921967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.921981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.921990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.921999 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.922087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-06-01 05:07:22.922110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.922125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.922178 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:07:22.922192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.922245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.922263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 
05:07:22.922313 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:07:22.922327 | orchestrator | 2025-06-01 05:07:22.922342 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 05:07:22.922356 | orchestrator | Sunday 01 June 2025 05:05:19 +0000 (0:00:00.762) 0:02:28.110 *********** 2025-06-01 05:07:22.922368 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:07:22.922382 | orchestrator | 2025-06-01 05:07:22.922397 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-01 05:07:22.922411 | orchestrator | Sunday 01 June 2025 05:05:20 +0000 (0:00:00.526) 0:02:28.636 *********** 2025-06-01 05:07:22.922425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.922474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.922491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.922505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.922527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.922542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.922556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922571 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922633 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.922720 | orchestrator | 2025-06-01 05:07:22.922734 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-01 05:07:22.922747 | orchestrator | Sunday 01 June 2025 05:05:25 +0000 (0:00:05.158) 0:02:33.794 *********** 2025-06-01 05:07:22.922766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.922788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.922802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.922876 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.922903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.922918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.922939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.922967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.922980 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:07:22.922993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.923015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.923035 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.923090 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:07:22.923104 | orchestrator | 2025-06-01 05:07:22.923117 | 
orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-01 05:07:22.923131 | orchestrator | Sunday 01 June 2025 05:05:26 +0000 (0:00:00.706) 0:02:34.500 *********** 2025-06-01 05:07:22.923146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.923161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.923175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.923241 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.923255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.923269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.923284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923298 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.923332 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:07:22.923341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 05:07:22.923349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 05:07:22.923357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 05:07:22.923374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 05:07:22.923382 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:07:22.923390 | orchestrator | 2025-06-01 05:07:22.923398 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-01 05:07:22.923406 | orchestrator | Sunday 01 June 2025 05:05:26 +0000 (0:00:00.882) 0:02:35.383 *********** 2025-06-01 05:07:22.923427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.923436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.923445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 
05:07:22.923453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.923462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.923470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-01 05:07:22.923499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923588 | orchestrator | 2025-06-01 05:07:22.923596 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-01 05:07:22.923604 | orchestrator | Sunday 01 June 2025 05:05:32 +0000 (0:00:05.252) 0:02:40.636 *********** 2025-06-01 05:07:22.923612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-01 05:07:22.923620 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-01 05:07:22.923628 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-01 05:07:22.923636 | orchestrator | 2025-06-01 05:07:22.923644 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-06-01 05:07:22.923652 | orchestrator | Sunday 01 June 2025 05:05:33 +0000 (0:00:01.591) 0:02:42.227 *********** 2025-06-01 05:07:22.923660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.923668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.923689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.923698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.923706 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.923714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.923723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.923895 | orchestrator | 2025-06-01 05:07:22.923904 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-06-01 05:07:22.923912 | orchestrator | Sunday 01 June 2025 05:05:50 +0000 (0:00:16.328) 0:02:58.556 *********** 2025-06-01 05:07:22.923920 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.923928 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.923936 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.923943 | orchestrator | 2025-06-01 05:07:22.923951 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-01 05:07:22.923959 | orchestrator | Sunday 01 June 2025 05:05:51 +0000 (0:00:01.543) 0:03:00.099 *********** 2025-06-01 05:07:22.923973 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.923981 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.923989 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924001 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924009 | orchestrator | changed: [testbed-node-1] => 
(item=client_ca.cert.pem) 2025-06-01 05:07:22.924017 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924025 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924033 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924041 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924049 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924056 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924064 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924072 | orchestrator | 2025-06-01 05:07:22.924080 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-01 05:07:22.924088 | orchestrator | Sunday 01 June 2025 05:05:56 +0000 (0:00:05.365) 0:03:05.464 *********** 2025-06-01 05:07:22.924096 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924103 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924111 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924119 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924127 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924135 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924142 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924150 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924158 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924166 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.key.pem) 2025-06-01 05:07:22.924173 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924181 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924197 | orchestrator | 2025-06-01 05:07:22.924210 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-01 05:07:22.924223 | orchestrator | Sunday 01 June 2025 05:06:01 +0000 (0:00:04.976) 0:03:10.441 *********** 2025-06-01 05:07:22.924237 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924250 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924263 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-01 05:07:22.924276 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924291 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924304 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-01 05:07:22.924316 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924324 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924332 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-01 05:07:22.924339 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924347 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924355 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-01 05:07:22.924363 | orchestrator | 2025-06-01 05:07:22.924371 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-01 05:07:22.924379 | orchestrator | Sunday 01 June 2025 05:06:07 +0000 (0:00:05.090) 0:03:15.531 *********** 
2025-06-01 05:07:22.924387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.924406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.924419 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 05:07:22.924442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.924456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.924471 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 05:07:22.924485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 05:07:22.924637 | orchestrator | 2025-06-01 05:07:22.924652 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 05:07:22.924665 | orchestrator | Sunday 01 June 2025 05:06:10 +0000 (0:00:03.425) 0:03:18.957 *********** 2025-06-01 05:07:22.924679 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:07:22.924693 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:07:22.924707 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:07:22.924728 | orchestrator | 2025-06-01 05:07:22.924743 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-01 05:07:22.924756 | orchestrator | Sunday 01 June 2025 05:06:10 +0000 (0:00:00.325) 0:03:19.282 *********** 2025-06-01 05:07:22.924769 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.924783 | orchestrator | 2025-06-01 05:07:22.924797 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-01 05:07:22.924810 | orchestrator | Sunday 01 June 2025 05:06:12 +0000 (0:00:01.916) 0:03:21.198 *********** 2025-06-01 05:07:22.924825 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.924860 | orchestrator | 2025-06-01 05:07:22.924874 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-01 05:07:22.924888 | orchestrator | Sunday 01 June 2025 05:06:15 +0000 (0:00:02.495) 0:03:23.694 *********** 2025-06-01 05:07:22.924901 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.924915 | orchestrator | 2025-06-01 05:07:22.924928 | orchestrator | TASK [octavia : Creating Octavia persistence database user and 
setting permissions] *** 2025-06-01 05:07:22.924941 | orchestrator | Sunday 01 June 2025 05:06:17 +0000 (0:00:02.205) 0:03:25.899 *********** 2025-06-01 05:07:22.924955 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.924969 | orchestrator | 2025-06-01 05:07:22.924982 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-01 05:07:22.924996 | orchestrator | Sunday 01 June 2025 05:06:19 +0000 (0:00:02.141) 0:03:28.041 *********** 2025-06-01 05:07:22.925009 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.925023 | orchestrator | 2025-06-01 05:07:22.925037 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-01 05:07:22.925050 | orchestrator | Sunday 01 June 2025 05:06:39 +0000 (0:00:19.587) 0:03:47.628 *********** 2025-06-01 05:07:22.925063 | orchestrator | 2025-06-01 05:07:22.925077 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-01 05:07:22.925090 | orchestrator | Sunday 01 June 2025 05:06:39 +0000 (0:00:00.074) 0:03:47.703 *********** 2025-06-01 05:07:22.925103 | orchestrator | 2025-06-01 05:07:22.925117 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-01 05:07:22.925131 | orchestrator | Sunday 01 June 2025 05:06:39 +0000 (0:00:00.065) 0:03:47.768 *********** 2025-06-01 05:07:22.925145 | orchestrator | 2025-06-01 05:07:22.925158 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-01 05:07:22.925171 | orchestrator | Sunday 01 June 2025 05:06:39 +0000 (0:00:00.069) 0:03:47.837 *********** 2025-06-01 05:07:22.925185 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.925199 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.925212 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.925226 | orchestrator | 2025-06-01 05:07:22.925239 | 
orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-01 05:07:22.925252 | orchestrator | Sunday 01 June 2025 05:06:54 +0000 (0:00:15.099) 0:04:02.936 *********** 2025-06-01 05:07:22.925265 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.925279 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.925293 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.925307 | orchestrator | 2025-06-01 05:07:22.925320 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-01 05:07:22.925333 | orchestrator | Sunday 01 June 2025 05:07:05 +0000 (0:00:10.979) 0:04:13.915 *********** 2025-06-01 05:07:22.925346 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.925360 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.925374 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.925387 | orchestrator | 2025-06-01 05:07:22.925400 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-01 05:07:22.925415 | orchestrator | Sunday 01 June 2025 05:07:10 +0000 (0:00:05.241) 0:04:19.157 *********** 2025-06-01 05:07:22.925428 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.925442 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.925463 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.925477 | orchestrator | 2025-06-01 05:07:22.925490 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-01 05:07:22.925504 | orchestrator | Sunday 01 June 2025 05:07:15 +0000 (0:00:05.259) 0:04:24.416 *********** 2025-06-01 05:07:22.925518 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:07:22.925531 | orchestrator | changed: [testbed-node-1] 2025-06-01 05:07:22.925542 | orchestrator | changed: [testbed-node-2] 2025-06-01 05:07:22.925554 | orchestrator | 2025-06-01 05:07:22.925567 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-06-01 05:07:22.925579 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 05:07:22.925592 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 05:07:22.925605 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 05:07:22.925617 | orchestrator | 2025-06-01 05:07:22.925629 | orchestrator | 2025-06-01 05:07:22.925643 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:07:22.925664 | orchestrator | Sunday 01 June 2025 05:07:20 +0000 (0:00:05.016) 0:04:29.433 *********** 2025-06-01 05:07:22.925679 | orchestrator | =============================================================================== 2025-06-01 05:07:22.925698 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.59s 2025-06-01 05:07:22.925712 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.33s 2025-06-01 05:07:22.925725 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.50s 2025-06-01 05:07:22.925737 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.10s 2025-06-01 05:07:22.925751 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.89s 2025-06-01 05:07:22.925763 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.98s 2025-06-01 05:07:22.925776 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.70s 2025-06-01 05:07:22.925790 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.98s 2025-06-01 05:07:22.925803 | orchestrator | service-ks-register : 
octavia | Granting user roles --------------------- 7.56s 2025-06-01 05:07:22.925817 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.41s 2025-06-01 05:07:22.925877 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.13s 2025-06-01 05:07:22.925894 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.37s 2025-06-01 05:07:22.925906 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.26s 2025-06-01 05:07:22.925914 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.25s 2025-06-01 05:07:22.925922 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.24s 2025-06-01 05:07:22.925930 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.16s 2025-06-01 05:07:22.925937 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.13s 2025-06-01 05:07:22.925945 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.13s 2025-06-01 05:07:22.925953 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.09s 2025-06-01 05:07:22.925961 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.08s 2025-06-01 05:07:25.964549 | orchestrator | 2025-06-01 05:07:25 | INFO  | Wait 1 second(s) until refresh of running tasks 
2025-06-01 05:08:23.792062 | orchestrator | 2025-06-01 05:08:24.144165 | orchestrator | 2025-06-01 05:08:24.150232 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jun 1 05:08:24 UTC 2025 2025-06-01 05:08:24.150291 | orchestrator | 2025-06-01 
05:08:24.475101 | orchestrator | ok: Runtime: 1:32:44.114562 2025-06-01 05:08:24.738551 | 2025-06-01 05:08:24.738731 | TASK [Bootstrap services] 2025-06-01 05:08:25.500237 | orchestrator | 2025-06-01 05:08:25.500506 | orchestrator | # BOOTSTRAP 2025-06-01 05:08:25.500529 | orchestrator | 2025-06-01 05:08:25.500543 | orchestrator | + set -e 2025-06-01 05:08:25.500556 | orchestrator | + echo 2025-06-01 05:08:25.500569 | orchestrator | + echo '# BOOTSTRAP' 2025-06-01 05:08:25.500591 | orchestrator | + echo 2025-06-01 05:08:25.500672 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-01 05:08:25.507864 | orchestrator | + set -e 2025-06-01 05:08:25.507909 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-01 05:08:27.447351 | orchestrator | 2025-06-01 05:08:27 | INFO  | It takes a moment until task a1b06a76-e5c7-4264-93bf-e1f9da141b40 (flavor-manager) has been started and output is visible here. 2025-06-01 05:08:31.690184 | orchestrator | 2025-06-01 05:08:31 | INFO  | Flavor SCS-1V-4 created 2025-06-01 05:08:31.850693 | orchestrator | 2025-06-01 05:08:31 | INFO  | Flavor SCS-2V-8 created 2025-06-01 05:08:32.063541 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-4V-16 created 2025-06-01 05:08:32.230141 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-8V-32 created 2025-06-01 05:08:32.348654 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-1V-2 created 2025-06-01 05:08:32.485918 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-2V-4 created 2025-06-01 05:08:32.631604 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-4V-8 created 2025-06-01 05:08:32.766569 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-8V-16 created 2025-06-01 05:08:32.898496 | orchestrator | 2025-06-01 05:08:32 | INFO  | Flavor SCS-16V-32 created 2025-06-01 05:08:33.027730 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-1V-8 created 2025-06-01 05:08:33.158501 | orchestrator | 
2025-06-01 05:08:33 | INFO  | Flavor SCS-2V-16 created 2025-06-01 05:08:33.288698 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-4V-32 created 2025-06-01 05:08:33.432710 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-1L-1 created 2025-06-01 05:08:33.553136 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-2V-4-20s created 2025-06-01 05:08:33.679336 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-4V-16-100s created 2025-06-01 05:08:33.799548 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-1V-4-10 created 2025-06-01 05:08:33.927015 | orchestrator | 2025-06-01 05:08:33 | INFO  | Flavor SCS-2V-8-20 created 2025-06-01 05:08:34.097903 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-4V-16-50 created 2025-06-01 05:08:34.223388 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-8V-32-100 created 2025-06-01 05:08:34.347106 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-1V-2-5 created 2025-06-01 05:08:34.476286 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-2V-4-10 created 2025-06-01 05:08:34.619620 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-4V-8-20 created 2025-06-01 05:08:34.740558 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-8V-16-50 created 2025-06-01 05:08:34.882377 | orchestrator | 2025-06-01 05:08:34 | INFO  | Flavor SCS-16V-32-100 created 2025-06-01 05:08:35.018578 | orchestrator | 2025-06-01 05:08:35 | INFO  | Flavor SCS-1V-8-20 created 2025-06-01 05:08:35.153735 | orchestrator | 2025-06-01 05:08:35 | INFO  | Flavor SCS-2V-16-50 created 2025-06-01 05:08:35.285940 | orchestrator | 2025-06-01 05:08:35 | INFO  | Flavor SCS-4V-32-100 created 2025-06-01 05:08:35.397346 | orchestrator | 2025-06-01 05:08:35 | INFO  | Flavor SCS-1L-1-5 created 2025-06-01 05:08:37.687214 | orchestrator | 2025-06-01 05:08:37 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-01 05:08:37.691713 | orchestrator | Registering Redlock._acquired_script 
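
The flavor names created above follow the SCS flavor naming scheme. As an illustrative sketch (not part of the flavor-manager tool itself), a name such as `SCS-4V-16-50` can be decomposed into its resource components, assuming the pattern `SCS-<vCPUs><class>-<RAM GiB>[-<root disk GB>[s]]`:

```python
import re

# Hypothetical helper for illustration: split an SCS flavor name like
# those logged above ("SCS-4V-16-50", "SCS-2V-4-20s", "SCS-1L-1") into
# its components. Assumes the SCS naming scheme
# SCS-<vCPUs><class>-<RAM GiB>[-<root disk GB>[s]].
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cls>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cls"),  # V = vCPU, L = low-performance core
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "ssd": m.group("ssd") is not None,
    }

print(parse_scs_flavor("SCS-4V-16-50"))
```

For example, `SCS-2V-4-20s` parses to 2 vCPUs, 4 GiB RAM, and a 20 GB SSD root disk, while names without a disk suffix (e.g. `SCS-1V-4`) imply no root disk size.
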
2025-06-01 05:08:37.691776 | orchestrator | Registering Redlock._extend_script
2025-06-01 05:08:37.691869 | orchestrator | Registering Redlock._release_script
2025-06-01 05:08:37.753458 | orchestrator | 2025-06-01 05:08:37 | INFO  | Task 069df828-658d-44ea-8147-bfb071c45255 (bootstrap-basic) was prepared for execution.
2025-06-01 05:08:37.753556 | orchestrator | 2025-06-01 05:08:37 | INFO  | It takes a moment until task 069df828-658d-44ea-8147-bfb071c45255 (bootstrap-basic) has been started and output is visible here.
2025-06-01 05:08:41.935333 | orchestrator |
2025-06-01 05:08:41.936366 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-06-01 05:08:41.939342 | orchestrator |
2025-06-01 05:08:41.939963 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 05:08:41.940741 | orchestrator | Sunday 01 June 2025 05:08:41 +0000 (0:00:00.080) 0:00:00.080 ***********
2025-06-01 05:08:43.753659 | orchestrator | ok: [localhost]
2025-06-01 05:08:43.754511 | orchestrator |
2025-06-01 05:08:43.755310 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-06-01 05:08:43.756212 | orchestrator | Sunday 01 June 2025 05:08:43 +0000 (0:00:01.820) 0:00:01.901 ***********
2025-06-01 05:08:52.821065 | orchestrator | ok: [localhost]
2025-06-01 05:08:52.821245 | orchestrator |
2025-06-01 05:08:52.823981 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-06-01 05:08:52.824617 | orchestrator | Sunday 01 June 2025 05:08:52 +0000 (0:00:09.066) 0:00:10.967 ***********
2025-06-01 05:09:00.229610 | orchestrator | changed: [localhost]
2025-06-01 05:09:00.229722 | orchestrator |
2025-06-01 05:09:00.230310 | orchestrator | TASK [Get volume type local] ***************************************************
2025-06-01 05:09:00.230470 | orchestrator | Sunday 01 June 2025 05:09:00 +0000 (0:00:07.407) 0:00:18.374 ***********
2025-06-01 05:09:07.089341 | orchestrator | ok: [localhost]
2025-06-01 05:09:07.089673 | orchestrator |
2025-06-01 05:09:07.090757 | orchestrator | TASK [Create volume type local] ************************************************
2025-06-01 05:09:07.091186 | orchestrator | Sunday 01 June 2025 05:09:07 +0000 (0:00:06.859) 0:00:25.233 ***********
2025-06-01 05:09:13.619917 | orchestrator | changed: [localhost]
2025-06-01 05:09:13.620456 | orchestrator |
2025-06-01 05:09:13.620774 | orchestrator | TASK [Create public network] ***************************************************
2025-06-01 05:09:13.621790 | orchestrator | Sunday 01 June 2025 05:09:13 +0000 (0:00:06.532) 0:00:31.765 ***********
2025-06-01 05:09:18.876093 | orchestrator | changed: [localhost]
2025-06-01 05:09:18.877290 | orchestrator |
2025-06-01 05:09:18.877772 | orchestrator | TASK [Set public network to default] *******************************************
2025-06-01 05:09:18.879003 | orchestrator | Sunday 01 June 2025 05:09:18 +0000 (0:00:05.848) 0:00:37.022 ***********
2025-06-01 05:09:24.726057 | orchestrator | changed: [localhost]
2025-06-01 05:09:24.727402 | orchestrator |
2025-06-01 05:09:24.727825 | orchestrator | TASK [Create public subnet] ****************************************************
2025-06-01 05:09:24.729698 | orchestrator | Sunday 01 June 2025 05:09:24 +0000 (0:00:05.848) 0:00:42.871 ***********
2025-06-01 05:09:29.156951 | orchestrator | changed: [localhost]
2025-06-01 05:09:29.158543 | orchestrator |
2025-06-01 05:09:29.159423 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-06-01 05:09:29.160507 | orchestrator | Sunday 01 June 2025 05:09:29 +0000 (0:00:04.431) 0:00:47.302 ***********
2025-06-01 05:09:32.885638 | orchestrator | changed: [localhost]
2025-06-01 05:09:32.885876 | orchestrator |
2025-06-01 05:09:32.889014 | orchestrator | TASK [Create manager role] *****************************************************
2025-06-01 05:09:32.889740 | orchestrator | Sunday 01 June 2025 05:09:32 +0000 (0:00:03.721) 0:00:51.024 ***********
2025-06-01 05:09:36.472128 | orchestrator | ok: [localhost]
2025-06-01 05:09:36.473415 | orchestrator |
2025-06-01 05:09:36.475020 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:09:36.477976 | orchestrator | 2025-06-01 05:09:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 05:09:36.479286 | orchestrator | 2025-06-01 05:09:36 | INFO  | Please wait and do not abort execution.
2025-06-01 05:09:36.483205 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 05:09:36.484501 | orchestrator |
2025-06-01 05:09:36.486517 | orchestrator |
2025-06-01 05:09:36.487712 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:09:36.488914 | orchestrator | Sunday 01 June 2025 05:09:36 +0000 (0:00:03.589) 0:00:54.614 ***********
2025-06-01 05:09:36.489535 | orchestrator | ===============================================================================
2025-06-01 05:09:36.490509 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.07s
2025-06-01 05:09:36.490974 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.41s
2025-06-01 05:09:36.491817 | orchestrator | Get volume type local --------------------------------------------------- 6.86s
2025-06-01 05:09:36.492483 | orchestrator | Create volume type local ------------------------------------------------ 6.53s
2025-06-01 05:09:36.493155 | orchestrator | Set public network to default ------------------------------------------- 5.85s
2025-06-01 05:09:36.494095 | orchestrator | Create public network --------------------------------------------------- 5.26s
2025-06-01 05:09:36.496783 | orchestrator | Create public subnet ---------------------------------------------------- 4.43s
2025-06-01 05:09:36.497565 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.72s
2025-06-01 05:09:36.497957 | orchestrator | Create manager role ----------------------------------------------------- 3.59s
2025-06-01 05:09:36.498630 | orchestrator | Gathering Facts --------------------------------------------------------- 1.82s
2025-06-01 05:09:38.916081 | orchestrator | 2025-06-01 05:09:38 | INFO  | It takes a moment until task a0b4e5c7-2875-40d6-b0a2-fca001f2d245 (image-manager) has been started and output is visible here.
2025-06-01 05:09:42.303406 | orchestrator | 2025-06-01 05:09:42 | INFO  | Processing image 'Cirros 0.6.2'
2025-06-01 05:09:42.518551 | orchestrator | 2025-06-01 05:09:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-06-01 05:09:42.519569 | orchestrator | 2025-06-01 05:09:42 | INFO  | Importing image Cirros 0.6.2
2025-06-01 05:09:42.520587 | orchestrator | 2025-06-01 05:09:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-01 05:09:44.092872 | orchestrator | 2025-06-01 05:09:44 | INFO  | Waiting for image to leave queued state...
2025-06-01 05:09:46.144451 | orchestrator | 2025-06-01 05:09:46 | INFO  | Waiting for import to complete...
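The image-manager entries above apply the same fixed metadata set ("Setting property ...") to every image, combined with per-image fields such as the version and source URL. A minimal sketch of assembling such a property dictionary — `build_image_properties` is a hypothetical helper for illustration, not the tool's actual code; the property names are taken from the log:

```python
# Sketch (hypothetical) of the per-image metadata that openstack-image-manager
# logs above; the keys mirror the "Setting property" lines in this output.
def build_image_properties(name: str, version: str, source_url: str, build_date: str) -> dict:
    """Combine fixed hardware hints with per-image fields."""
    fixed = {
        "architecture": "x86_64",
        "hw_disk_bus": "scsi",
        "hw_rng_model": "virtio",
        "hw_scsi_model": "virtio-scsi",
        "hw_watchdog_action": "reset",
        "hypervisor_type": "qemu",
    }
    return {
        **fixed,
        "image_name": name,
        "internal_version": version,
        "os_version": version,
        "image_source": source_url,
        "image_build_date": build_date,
    }

props = build_image_properties(
    "Cirros", "0.6.2",
    "https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img",
    "2023-05-30",
)
```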
2025-06-01 05:09:56.451547 | orchestrator | 2025-06-01 05:09:56 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-06-01 05:09:56.646198 | orchestrator | 2025-06-01 05:09:56 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-06-01 05:09:56.646295 | orchestrator | 2025-06-01 05:09:56 | INFO  | Setting internal_version = 0.6.2
2025-06-01 05:09:56.647015 | orchestrator | 2025-06-01 05:09:56 | INFO  | Setting image_original_user = cirros
2025-06-01 05:09:56.648291 | orchestrator | 2025-06-01 05:09:56 | INFO  | Adding tag os:cirros
2025-06-01 05:09:56.855266 | orchestrator | 2025-06-01 05:09:56 | INFO  | Setting property architecture: x86_64
2025-06-01 05:09:57.140679 | orchestrator | 2025-06-01 05:09:57 | INFO  | Setting property hw_disk_bus: scsi
2025-06-01 05:09:57.346256 | orchestrator | 2025-06-01 05:09:57 | INFO  | Setting property hw_rng_model: virtio
2025-06-01 05:09:57.520500 | orchestrator | 2025-06-01 05:09:57 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-01 05:09:57.707695 | orchestrator | 2025-06-01 05:09:57 | INFO  | Setting property hw_watchdog_action: reset
2025-06-01 05:09:57.887397 | orchestrator | 2025-06-01 05:09:57 | INFO  | Setting property hypervisor_type: qemu
2025-06-01 05:09:58.066387 | orchestrator | 2025-06-01 05:09:58 | INFO  | Setting property os_distro: cirros
2025-06-01 05:09:58.268009 | orchestrator | 2025-06-01 05:09:58 | INFO  | Setting property replace_frequency: never
2025-06-01 05:09:58.477228 | orchestrator | 2025-06-01 05:09:58 | INFO  | Setting property uuid_validity: none
2025-06-01 05:09:58.694176 | orchestrator | 2025-06-01 05:09:58 | INFO  | Setting property provided_until: none
2025-06-01 05:09:58.905558 | orchestrator | 2025-06-01 05:09:58 | INFO  | Setting property image_description: Cirros
2025-06-01 05:09:59.122242 | orchestrator | 2025-06-01 05:09:59 | INFO  | Setting property image_name: Cirros
2025-06-01 05:09:59.317410 | orchestrator | 2025-06-01 05:09:59 | INFO  | Setting property internal_version: 0.6.2
2025-06-01 05:09:59.497310 | orchestrator | 2025-06-01 05:09:59 | INFO  | Setting property image_original_user: cirros
2025-06-01 05:09:59.742208 | orchestrator | 2025-06-01 05:09:59 | INFO  | Setting property os_version: 0.6.2
2025-06-01 05:09:59.935994 | orchestrator | 2025-06-01 05:09:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-01 05:10:00.145974 | orchestrator | 2025-06-01 05:10:00 | INFO  | Setting property image_build_date: 2023-05-30
2025-06-01 05:10:00.388050 | orchestrator | 2025-06-01 05:10:00 | INFO  | Checking status of 'Cirros 0.6.2'
2025-06-01 05:10:00.388696 | orchestrator | 2025-06-01 05:10:00 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-06-01 05:10:00.389693 | orchestrator | 2025-06-01 05:10:00 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-06-01 05:10:00.567865 | orchestrator | 2025-06-01 05:10:00 | INFO  | Processing image 'Cirros 0.6.3'
2025-06-01 05:10:00.777366 | orchestrator | 2025-06-01 05:10:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-01 05:10:00.778366 | orchestrator | 2025-06-01 05:10:00 | INFO  | Importing image Cirros 0.6.3
2025-06-01 05:10:00.779091 | orchestrator | 2025-06-01 05:10:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-01 05:10:01.875538 | orchestrator | 2025-06-01 05:10:01 | INFO  | Waiting for image to leave queued state...
2025-06-01 05:10:03.921563 | orchestrator | 2025-06-01 05:10:03 | INFO  | Waiting for import to complete...
2025-06-01 05:10:14.064835 | orchestrator | 2025-06-01 05:10:14 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-01 05:10:14.481488 | orchestrator | 2025-06-01 05:10:14 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-01 05:10:14.481574 | orchestrator | 2025-06-01 05:10:14 | INFO  | Setting internal_version = 0.6.3
2025-06-01 05:10:14.482314 | orchestrator | 2025-06-01 05:10:14 | INFO  | Setting image_original_user = cirros
2025-06-01 05:10:14.482834 | orchestrator | 2025-06-01 05:10:14 | INFO  | Adding tag os:cirros
2025-06-01 05:10:14.757056 | orchestrator | 2025-06-01 05:10:14 | INFO  | Setting property architecture: x86_64
2025-06-01 05:10:15.013715 | orchestrator | 2025-06-01 05:10:15 | INFO  | Setting property hw_disk_bus: scsi
2025-06-01 05:10:15.304224 | orchestrator | 2025-06-01 05:10:15 | INFO  | Setting property hw_rng_model: virtio
2025-06-01 05:10:15.472179 | orchestrator | 2025-06-01 05:10:15 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-01 05:10:15.659255 | orchestrator | 2025-06-01 05:10:15 | INFO  | Setting property hw_watchdog_action: reset
2025-06-01 05:10:15.880461 | orchestrator | 2025-06-01 05:10:15 | INFO  | Setting property hypervisor_type: qemu
2025-06-01 05:10:16.086552 | orchestrator | 2025-06-01 05:10:16 | INFO  | Setting property os_distro: cirros
2025-06-01 05:10:16.268129 | orchestrator | 2025-06-01 05:10:16 | INFO  | Setting property replace_frequency: never
2025-06-01 05:10:16.445668 | orchestrator | 2025-06-01 05:10:16 | INFO  | Setting property uuid_validity: none
2025-06-01 05:10:16.667604 | orchestrator | 2025-06-01 05:10:16 | INFO  | Setting property provided_until: none
2025-06-01 05:10:16.839612 | orchestrator | 2025-06-01 05:10:16 | INFO  | Setting property image_description: Cirros
2025-06-01 05:10:17.013262 | orchestrator | 2025-06-01 05:10:17 | INFO  | Setting property image_name: Cirros
2025-06-01 05:10:17.203705 | orchestrator | 2025-06-01 05:10:17 | INFO  | Setting property internal_version: 0.6.3
2025-06-01 05:10:17.425419 | orchestrator | 2025-06-01 05:10:17 | INFO  | Setting property image_original_user: cirros
2025-06-01 05:10:17.624230 | orchestrator | 2025-06-01 05:10:17 | INFO  | Setting property os_version: 0.6.3
2025-06-01 05:10:18.034989 | orchestrator | 2025-06-01 05:10:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-01 05:10:18.246323 | orchestrator | 2025-06-01 05:10:18 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-01 05:10:18.417821 | orchestrator | 2025-06-01 05:10:18 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-01 05:10:18.419881 | orchestrator | 2025-06-01 05:10:18 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-01 05:10:18.420474 | orchestrator | 2025-06-01 05:10:18 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-01 05:10:19.450605 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-01 05:10:21.338430 | orchestrator | 2025-06-01 05:10:21 | INFO  | date: 2025-06-01
2025-06-01 05:10:21.338529 | orchestrator | 2025-06-01 05:10:21 | INFO  | image: octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 05:10:21.338547 | orchestrator | 2025-06-01 05:10:21 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 05:10:21.338579 | orchestrator | 2025-06-01 05:10:21 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2.CHECKSUM
2025-06-01 05:10:21.358252 | orchestrator | 2025-06-01 05:10:21 | INFO  | checksum: 700471d784d62fa237f40333fe5c8c65dd56f28e7d4645bd524c044147a32271
2025-06-01 05:10:21.420056 | orchestrator | 2025-06-01 05:10:21 | INFO  | It takes a moment until task e3cfb131-b3ed-4186-a653-097bb3fe0d51 (image-manager) has been started and output is visible here.
2025-06-01 05:10:21.646579 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-06-01 05:10:21.646878 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-01 05:10:23.815084 | orchestrator | 2025-06-01 05:10:23 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-01'
2025-06-01 05:10:23.835089 | orchestrator | 2025-06-01 05:10:23 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2: 200
2025-06-01 05:10:23.835671 | orchestrator | 2025-06-01 05:10:23 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-01
2025-06-01 05:10:23.836749 | orchestrator | 2025-06-01 05:10:23 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 05:10:24.210078 | orchestrator | 2025-06-01 05:10:24 | INFO  | Waiting for image to leave queued state...
2025-06-01 05:10:26.258006 | orchestrator | 2025-06-01 05:10:26 | INFO  | Waiting for import to complete...
2025-06-01 05:10:36.543377 | orchestrator | 2025-06-01 05:10:36 | INFO  | Waiting for import to complete...
2025-06-01 05:10:46.638131 | orchestrator | 2025-06-01 05:10:46 | INFO  | Waiting for import to complete...
2025-06-01 05:10:56.739679 | orchestrator | 2025-06-01 05:10:56 | INFO  | Waiting for import to complete...
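The amphora bootstrap script above logs both the image URL and a `checksum` taken from the published `.CHECKSUM` file; the usual pattern is to hash the downloaded image and compare against that value. A minimal sketch of the comparison step — `verify_checksum` is a hypothetical helper for illustration, not the script itself:

```python
import hashlib

# Sketch (hypothetical) of the checksum comparison implied by the
# "checksum:" / "checksum_url:" log lines above.
def verify_checksum(data: bytes, expected_sha256: str) -> bool:
    """Return True when the SHA256 hex digest of data matches the expected value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256.lower()

# Known digest used for demonstration: SHA256 of the empty byte string.
empty_digest = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
```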
2025-06-01 05:11:06.825643 | orchestrator | 2025-06-01 05:11:06 | INFO  | Waiting for import to complete... 2025-06-01 05:11:16.973523 | orchestrator | 2025-06-01 05:11:16 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-01' successfully completed, reloading images 2025-06-01 05:11:17.316882 | orchestrator | 2025-06-01 05:11:17 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-01' 2025-06-01 05:11:17.317035 | orchestrator | 2025-06-01 05:11:17 | INFO  | Setting internal_version = 2025-06-01 2025-06-01 05:11:17.318489 | orchestrator | 2025-06-01 05:11:17 | INFO  | Setting image_original_user = ubuntu 2025-06-01 05:11:17.319942 | orchestrator | 2025-06-01 05:11:17 | INFO  | Adding tag amphora 2025-06-01 05:11:17.525150 | orchestrator | 2025-06-01 05:11:17 | INFO  | Adding tag os:ubuntu 2025-06-01 05:11:17.756006 | orchestrator | 2025-06-01 05:11:17 | INFO  | Setting property architecture: x86_64 2025-06-01 05:11:17.948058 | orchestrator | 2025-06-01 05:11:17 | INFO  | Setting property hw_disk_bus: scsi 2025-06-01 05:11:18.115732 | orchestrator | 2025-06-01 05:11:18 | INFO  | Setting property hw_rng_model: virtio 2025-06-01 05:11:18.322385 | orchestrator | 2025-06-01 05:11:18 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-01 05:11:18.518722 | orchestrator | 2025-06-01 05:11:18 | INFO  | Setting property hw_watchdog_action: reset 2025-06-01 05:11:18.714014 | orchestrator | 2025-06-01 05:11:18 | INFO  | Setting property hypervisor_type: qemu 2025-06-01 05:11:18.923651 | orchestrator | 2025-06-01 05:11:18 | INFO  | Setting property os_distro: ubuntu 2025-06-01 05:11:19.112145 | orchestrator | 2025-06-01 05:11:19 | INFO  | Setting property replace_frequency: quarterly 2025-06-01 05:11:19.306369 | orchestrator | 2025-06-01 05:11:19 | INFO  | Setting property uuid_validity: last-1 2025-06-01 05:11:19.516879 | orchestrator | 2025-06-01 05:11:19 | INFO  | Setting property provided_until: none 2025-06-01 05:11:19.724447 | orchestrator | 
2025-06-01 05:11:19 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-01 05:11:19.944565 | orchestrator | 2025-06-01 05:11:19 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-01 05:11:20.151505 | orchestrator | 2025-06-01 05:11:20 | INFO  | Setting property internal_version: 2025-06-01 2025-06-01 05:11:20.335731 | orchestrator | 2025-06-01 05:11:20 | INFO  | Setting property image_original_user: ubuntu 2025-06-01 05:11:20.514799 | orchestrator | 2025-06-01 05:11:20 | INFO  | Setting property os_version: 2025-06-01 2025-06-01 05:11:20.727901 | orchestrator | 2025-06-01 05:11:20 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2 2025-06-01 05:11:20.938997 | orchestrator | 2025-06-01 05:11:20 | INFO  | Setting property image_build_date: 2025-06-01 2025-06-01 05:11:21.155665 | orchestrator | 2025-06-01 05:11:21 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-01' 2025-06-01 05:11:21.156016 | orchestrator | 2025-06-01 05:11:21 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-01' 2025-06-01 05:11:21.318863 | orchestrator | 2025-06-01 05:11:21 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-01 05:11:21.319022 | orchestrator | 2025-06-01 05:11:21 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-01 05:11:21.319892 | orchestrator | 2025-06-01 05:11:21 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-01 05:11:21.320454 | orchestrator | 2025-06-01 05:11:21 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-01 05:11:21.918389 | orchestrator | ok: Runtime: 0:02:56.635633 2025-06-01 05:11:21.984902 | 2025-06-01 05:11:21.985054 | TASK [Run checks] 2025-06-01 05:11:22.709938 | orchestrator | + set -e 2025-06-01 
05:11:22.710227 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 05:11:22.710261 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 05:11:22.710295 | orchestrator | ++ INTERACTIVE=false 2025-06-01 05:11:22.710317 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 05:11:22.710337 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 05:11:22.710360 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-01 05:11:22.711542 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-01 05:11:22.718664 | orchestrator | 2025-06-01 05:11:22.718788 | orchestrator | # CHECK 2025-06-01 05:11:22.718807 | orchestrator | 2025-06-01 05:11:22.718821 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 05:11:22.718839 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 05:11:22.718850 | orchestrator | + echo 2025-06-01 05:11:22.718861 | orchestrator | + echo '# CHECK' 2025-06-01 05:11:22.718872 | orchestrator | + echo 2025-06-01 05:11:22.718887 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-01 05:11:22.719974 | orchestrator | ++ semver latest 5.0.0 2025-06-01 05:11:22.782089 | orchestrator | 2025-06-01 05:11:22.782196 | orchestrator | ## Containers @ testbed-manager 2025-06-01 05:11:22.782215 | orchestrator | 2025-06-01 05:11:22.782232 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-01 05:11:22.782244 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-01 05:11:22.782256 | orchestrator | + echo 2025-06-01 05:11:22.782267 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-01 05:11:22.782279 | orchestrator | + echo 2025-06-01 05:11:22.782290 | orchestrator | + osism container testbed-manager ps 2025-06-01 05:11:24.887533 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-01 05:11:24.887670 | orchestrator | 5893fa9c15a3 
registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2025-06-01 05:11:24.887709 | orchestrator | 42b42c8cf24c registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-06-01 05:11:24.887729 | orchestrator | 5a392fa53512 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-01 05:11:24.887741 | orchestrator | bb3713c85d07 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-01 05:11:24.887778 | orchestrator | 6e9bd32dd821 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-06-01 05:11:24.887796 | orchestrator | 4b36d49dbcf1 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2025-06-01 05:11:24.887808 | orchestrator | 70a41a340c09 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-01 05:11:24.887819 | orchestrator | b4edf4c6beeb registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-06-01 05:11:24.887831 | orchestrator | b26d50b19942 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-06-01 05:11:24.887869 | orchestrator | b389e4416f7a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2025-06-01 05:11:24.887881 | orchestrator | 661422b9f79e registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient 2025-06-01 05:11:24.887892 | orchestrator | cfea584fbe80 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 30 minutes 
ago Up 29 minutes (healthy) 8080/tcp homer 2025-06-01 05:11:24.887903 | orchestrator | 8fccbb548ab8 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-01 05:11:24.887914 | orchestrator | e80baf547e8f registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2025-06-01 05:11:24.887925 | orchestrator | f869ab6ed26d registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2025-06-01 05:11:24.887956 | orchestrator | 2745c6cbc2f7 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2025-06-01 05:11:24.887974 | orchestrator | 6b09972f01d4 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2025-06-01 05:11:24.887986 | orchestrator | 53f610572156 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2025-06-01 05:11:24.887996 | orchestrator | 31b5eb39989c registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2025-06-01 05:11:24.888008 | orchestrator | 4fe834c447a6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2025-06-01 05:11:24.888019 | orchestrator | d9fd72ff78b6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2025-06-01 05:11:24.888030 | orchestrator | 6d0ec08b3be9 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2025-06-01 05:11:24.888040 | orchestrator | d7e054721af0 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) 
osismclient 2025-06-01 05:11:24.888051 | orchestrator | 55b2df0a160f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2025-06-01 05:11:24.888071 | orchestrator | 19eabfabb4fe registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-01 05:11:24.888082 | orchestrator | 79a8b2d4a8b0 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-watchdog-1 2025-06-01 05:11:24.888093 | orchestrator | ef1a59336ae6 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2025-06-01 05:11:24.888104 | orchestrator | bb644055fa97 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2025-06-01 05:11:24.888115 | orchestrator | 0abd1ced62a1 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-01 05:11:25.193647 | orchestrator | 2025-06-01 05:11:25.193776 | orchestrator | ## Images @ testbed-manager 2025-06-01 05:11:25.193793 | orchestrator | 2025-06-01 05:11:25.193806 | orchestrator | + echo 2025-06-01 05:11:25.193818 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-01 05:11:25.193830 | orchestrator | + echo 2025-06-01 05:11:25.193841 | orchestrator | + osism container testbed-manager images 2025-06-01 05:11:27.259619 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-01 05:11:27.259787 | orchestrator | registry.osism.tech/osism/homer v25.05.2 322317afcf13 2 hours ago 11.5MB 2025-06-01 05:11:27.259805 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f2fe5144a396 2 hours ago 225MB 2025-06-01 05:11:27.259817 | orchestrator | 
registry.osism.tech/osism/cephclient reef cbc3771a81fb 2 hours ago 454MB 2025-06-01 05:11:27.259828 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 32ebbc09103d 4 hours ago 629MB 2025-06-01 05:11:27.259861 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f4da4c70dc26 4 hours ago 747MB 2025-06-01 05:11:27.259874 | orchestrator | registry.osism.tech/kolla/cron 2024.2 993e54ecf44d 4 hours ago 319MB 2025-06-01 05:11:27.259885 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5ed8b7a82f00 4 hours ago 359MB 2025-06-01 05:11:27.259896 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 7f024965d3c9 4 hours ago 361MB 2025-06-01 05:11:27.259907 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 2a7133799b00 4 hours ago 411MB 2025-06-01 05:11:27.259918 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 fdd95edcb690 4 hours ago 457MB 2025-06-01 05:11:27.259928 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 704c11a7e1be 4 hours ago 892MB 2025-06-01 05:11:27.259939 | orchestrator | registry.osism.tech/osism/osism-ansible latest 55526329ea01 5 hours ago 577MB 2025-06-01 05:11:27.259950 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 9d5f98612de0 5 hours ago 574MB 2025-06-01 05:11:27.259960 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 49fa1c403405 5 hours ago 538MB 2025-06-01 05:11:27.259971 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest d1d178760c2c 5 hours ago 1.21GB 2025-06-01 05:11:27.260005 | orchestrator | registry.osism.tech/osism/osism latest 8933f5ca1a3e 5 hours ago 297MB 2025-06-01 05:11:27.260016 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 699ffae37ba8 5 hours ago 310MB 2025-06-01 05:11:27.260027 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 2 days ago 41.4MB 2025-06-01 05:11:27.260038 | orchestrator | 
registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 4 days ago 224MB 2025-06-01 05:11:27.260049 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-06-01 05:11:27.260059 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-01 05:11:27.260070 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB 2025-06-01 05:11:27.260081 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-06-01 05:11:27.534948 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-01 05:11:27.535738 | orchestrator | ++ semver latest 5.0.0 2025-06-01 05:11:27.590929 | orchestrator | 2025-06-01 05:11:27.591024 | orchestrator | ## Containers @ testbed-node-0 2025-06-01 05:11:27.591040 | orchestrator | 2025-06-01 05:11:27.591052 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-01 05:11:27.591063 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-01 05:11:27.591074 | orchestrator | + echo 2025-06-01 05:11:27.591085 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-01 05:11:27.591097 | orchestrator | + echo 2025-06-01 05:11:27.591108 | orchestrator | + osism container testbed-node-0 ps 2025-06-01 05:11:29.772040 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-01 05:11:29.772158 | orchestrator | 5e14072f5cb2 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-01 05:11:29.772177 | orchestrator | bbbcd4eacec8 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-01 05:11:29.772189 | orchestrator | a7e827578f89 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-01 
05:11:29.772200 | orchestrator | 3828e32e40ba registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-01 05:11:29.772211 | orchestrator | d631b73ca7a2 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-06-01 05:11:29.772223 | orchestrator | 56747c3f4aaf registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-01 05:11:29.772233 | orchestrator | a5729c455d35 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-01 05:11:29.772244 | orchestrator | 1e2089aad177 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-01 05:11:29.772272 | orchestrator | 03e3c82f5d4d registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-01 05:11:29.772284 | orchestrator | bfd8e2d91742 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-01 05:11:29.772295 | orchestrator | 055e2e22ae9a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-01 05:11:29.772325 | orchestrator | 392e17b2e436 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-01 05:11:29.772336 | orchestrator | 19d573df914b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-01 05:11:29.772347 | orchestrator | 11317e55d5a6 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-01 05:11:29.772358 | orchestrator | 
1ed4f6733b55 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-06-01 05:11:29.772368 | orchestrator | 018922827da6 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2025-06-01 05:11:29.772379 | orchestrator | 707cc88efd23 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-06-01 05:11:29.772390 | orchestrator | 40454e6c269d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-06-01 05:11:29.772401 | orchestrator | 7a64cdf4ea2c registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-01 05:11:29.772411 | orchestrator | 4576ded8397f registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-01 05:11:29.772422 | orchestrator | bd71ab4f1d51 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-01 05:11:29.772459 | orchestrator | 122eb82320ed registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-01 05:11:29.772471 | orchestrator | f6d95e06018e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-01 05:11:29.772482 | orchestrator | 598cf918b2ae registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-06-01 05:11:29.772493 | orchestrator | dae460ad3a3c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 
2025-06-01 05:11:29.772504 | orchestrator | 665ca1ca4e6d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-01 05:11:29.772514 | orchestrator | 028fa71cb503 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-01 05:11:29.772535 | orchestrator | af30184ba0c0 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-06-01 05:11:29.772547 | orchestrator | d27c52a82190 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-06-01 05:11:29.772558 | orchestrator | fa63ed5d657f registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-01 05:11:29.772577 | orchestrator | 8a13b41f366b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-01 05:11:29.772588 | orchestrator | 55dffee0acba registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2025-06-01 05:11:29.772599 | orchestrator | 3ad0807fdfa9 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-06-01 05:11:29.772610 | orchestrator | 6025306824b4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2025-06-01 05:11:29.772621 | orchestrator | baca0d2c9bdf registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-06-01 05:11:29.772631 | orchestrator | f87a8f3ebea0 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (unhealthy) horizon
2025-06-01 05:11:29.772642 | orchestrator | 4a5688655b69 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2025-06-01 05:11:29.772653 | orchestrator | 1ea66650b7b3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-06-01 05:11:29.772664 | orchestrator | d3c15b877c8f registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-06-01 05:11:29.772675 | orchestrator | 8ad389322105 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2025-06-01 05:11:29.772686 | orchestrator | 87ce36706eea registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-06-01 05:11:29.772696 | orchestrator | 1fee6495b0ca registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-06-01 05:11:29.772707 | orchestrator | 2ddeb7680d44 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-06-01 05:11:29.772725 | orchestrator | 6017d973340b registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-06-01 05:11:29.772736 | orchestrator | b3503f42db43 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2025-06-01 05:11:29.772747 | orchestrator | 5d085301beff registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2025-06-01 05:11:29.772790 | orchestrator | 7bae129368f1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2025-06-01 05:11:29.772802 | orchestrator | e13b5cc6903b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0
2025-06-01 05:11:29.772812 | orchestrator | 81cfe2d4fb4e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-06-01 05:11:29.772823 | orchestrator | ab8062a53c94 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-06-01 05:11:29.772840 | orchestrator | 2c073d389c49 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2025-06-01 05:11:29.772851 | orchestrator | 1b4f4b8111fc registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2025-06-01 05:11:29.772862 | orchestrator | bd26742e8815 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2025-06-01 05:11:29.772878 | orchestrator | 14d4c1b8ac10 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) memcached
2025-06-01 05:11:29.772889 | orchestrator | 7cbb59c23dc8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-01 05:11:29.772900 | orchestrator | 20c0b3671eaf registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes kolla_toolbox
2025-06-01 05:11:29.772911 | orchestrator | fc388349bdb2 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-01 05:11:30.039891 | orchestrator |
2025-06-01 05:11:30.040018 | orchestrator | ## Images @ testbed-node-0
2025-06-01 05:11:30.040044 | orchestrator |
2025-06-01 05:11:30.040057 | orchestrator | + echo
2025-06-01 05:11:30.040069 | orchestrator | + echo '## Images @ testbed-node-0'
2025-06-01 05:11:30.040082 | orchestrator | + echo
2025-06-01 05:11:30.040093 | orchestrator | + osism container testbed-node-0 images
2025-06-01 05:11:32.204273 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-01 05:11:32.204384 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 5af234df20cc 2 hours ago 1.27GB
2025-06-01 05:11:32.204400 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b9743cc0a7f6 4 hours ago 330MB
2025-06-01 05:11:32.204412 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 e3d062ebc33c 4 hours ago 1.01GB
2025-06-01 05:11:32.204423 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 32ebbc09103d 4 hours ago 629MB
2025-06-01 05:11:32.204433 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a320ec51a2b9 4 hours ago 419MB
2025-06-01 05:11:32.204444 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f5800a4656be 4 hours ago 1.55GB
2025-06-01 05:11:32.204454 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 178d34cce0ff 4 hours ago 1.59GB
2025-06-01 05:11:32.204465 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f4da4c70dc26 4 hours ago 747MB
2025-06-01 05:11:32.204475 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 06f9375cf986 4 hours ago 376MB
2025-06-01 05:11:32.204486 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 bdffde74ef72 4 hours ago 327MB
2025-06-01 05:11:32.204497 | orchestrator | registry.osism.tech/kolla/cron 2024.2 993e54ecf44d 4 hours ago 319MB
2025-06-01 05:11:32.204508 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 cdda7c2f0168 4 hours ago 319MB
2025-06-01 05:11:32.204518 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 13f6188cb9e5 4 hours ago 1.21GB
2025-06-01 05:11:32.204529 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5ed8b7a82f00 4 hours ago 359MB
2025-06-01 05:11:32.204539 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c0f221ee9695 4 hours ago 352MB
2025-06-01 05:11:32.204575 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 39b9dc9448eb 4 hours ago 354MB
2025-06-01 05:11:32.204587 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 2a7133799b00 4 hours ago 411MB
2025-06-01 05:11:32.204614 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 fb1b01f50484 4 hours ago 345MB
2025-06-01 05:11:32.204625 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 f045f4f01f80 4 hours ago 591MB
2025-06-01 05:11:32.204636 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 429ae22efbc7 4 hours ago 362MB
2025-06-01 05:11:32.204646 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 842513951a6d 4 hours ago 362MB
2025-06-01 05:11:32.204657 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 2a68da2fa13f 4 hours ago 325MB
2025-06-01 05:11:32.204667 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ff7ade953ec9 4 hours ago 326MB
2025-06-01 05:11:32.204678 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 9dd62c4384bc 4 hours ago 1.11GB
2025-06-01 05:11:32.204688 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 9124fd5deaf1 4 hours ago 1.12GB
2025-06-01 05:11:32.204699 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 904350615dac 4 hours ago 1.25GB
2025-06-01 05:11:32.204709 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 d35ce2a50365 4 hours ago 1.42GB
2025-06-01 05:11:32.204720 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 d13f632ab75c 4 hours ago 1.29GB
2025-06-01 05:11:32.204730 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 d373be916ca6 4 hours ago 1.3GB
2025-06-01 05:11:32.204741 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 41e606314cfe 4 hours ago 1.29GB
2025-06-01 05:11:32.204816 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1775a3afa66f 4 hours ago 1.41GB
2025-06-01 05:11:32.204839 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 7b0f959db40d 4 hours ago 1.41GB
2025-06-01 05:11:32.204859 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c0b785175579 4 hours ago 1.04GB
2025-06-01 05:11:32.204878 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 80e097072c0b 4 hours ago 1.04GB
2025-06-01 05:11:32.204897 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 8254a4fe4632 4 hours ago 1.04GB
2025-06-01 05:11:32.204912 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 74a2f1647f9b 4 hours ago 1.04GB
2025-06-01 05:11:32.204923 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 ccfeb490edc5 4 hours ago 1.04GB
2025-06-01 05:11:32.205037 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 5e2a187fb204 4 hours ago 1.11GB
2025-06-01 05:11:32.205055 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 054ed9578e90 4 hours ago 1.11GB
2025-06-01 05:11:32.205065 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5a712364313a 4 hours ago 1.13GB
2025-06-01 05:11:32.205079 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 382f8cb57351 4 hours ago 1.1GB
2025-06-01 05:11:32.205099 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 55e0bcb4827e 4 hours ago 1.1GB
2025-06-01 05:11:32.205111 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 1fa293b622ec 4 hours ago 1.12GB
2025-06-01 05:11:32.205122 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 f31a56430677 4 hours ago 1.1GB
2025-06-01 05:11:32.205135 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 8c28f1975edc 4 hours ago 1.12GB
2025-06-01 05:11:32.205174 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 db4a7764403e 4 hours ago 1.05GB
2025-06-01 05:11:32.205187 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 8afa9b7984b9 4 hours ago 1.05GB
2025-06-01 05:11:32.205197 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a798bf08fe30 4 hours ago 1.05GB
2025-06-01 05:11:32.205208 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f73039e3046a 4 hours ago 1.05GB
2025-06-01 05:11:32.205219 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 29579b505c5c 4 hours ago 1.06GB
2025-06-01 05:11:32.205229 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 5aa1c7c51ff9 4 hours ago 1.06GB
2025-06-01 05:11:32.205240 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 175ea7a5f5f6 4 hours ago 1.15GB
2025-06-01 05:11:32.205250 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 a29ce89ab8f4 4 hours ago 1.06GB
2025-06-01 05:11:32.205261 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 1be482c719ca 4 hours ago 1.06GB
2025-06-01 05:11:32.205271 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 2aaea623abef 4 hours ago 1.06GB
2025-06-01 05:11:32.205282 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 7e05a69de199 4 hours ago 1.04GB
2025-06-01 05:11:32.205293 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 bf228a7e02f2 4 hours ago 1.04GB
2025-06-01 05:11:32.205303 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1790864e17ae 4 hours ago 1.2GB
2025-06-01 05:11:32.205314 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 ff61ed1ae5ad 4 hours ago 1.31GB
2025-06-01 05:11:32.205324 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 729cb5ecf679 4 hours ago 947MB
2025-06-01 05:11:32.205335 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 625c3d7aade0 4 hours ago 948MB
2025-06-01 05:11:32.205346 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 307a17bafcee 4 hours ago 947MB
2025-06-01 05:11:32.205357 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b2e3c9e55b3f 4 hours ago 948MB
2025-06-01 05:11:32.599123 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-01 05:11:32.600141 | orchestrator | ++ semver latest 5.0.0
2025-06-01 05:11:32.669953 | orchestrator |
2025-06-01 05:11:32.670082 | orchestrator | ## Containers @ testbed-node-1
2025-06-01 05:11:32.670096 | orchestrator |
2025-06-01 05:11:32.670105 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-01 05:11:32.670114 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-01 05:11:32.670123 | orchestrator | + echo
2025-06-01 05:11:32.670132 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-01 05:11:32.670142 | orchestrator | + echo
2025-06-01 05:11:32.670151 | orchestrator | + osism container testbed-node-1 ps
2025-06-01 05:11:34.939359 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-01 05:11:34.939466 | orchestrator | 9dce9f894218 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-01 05:11:34.939480 | orchestrator | 81f73e19e194 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-01 05:11:34.939490 | orchestrator | dc64540632ee registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-01 05:11:34.939499 | orchestrator | 849f3da4fb49 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-01 05:11:34.939529 | orchestrator | a5fac4a3e011 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-01 05:11:34.939553 | orchestrator | 9f98319a4eff registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2025-06-01 05:11:34.939563 | orchestrator | 6be0f7feef25 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-01 05:11:34.939572 | orchestrator | 16522b307dc5 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-01 05:11:34.939581 | orchestrator | 2d6cfb2f9eaf registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-01 05:11:34.939589 | orchestrator | 65ff41135b04 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-01 05:11:34.939598 | orchestrator | 56a86f16692b registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-01 05:11:34.939607 | orchestrator | d83a28fe34b0 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-01 05:11:34.939615 | orchestrator | f950627c84ef registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-01 05:11:34.939624 | orchestrator | 0352140c177f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-01 05:11:34.939636 | orchestrator | a04bb3865613 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-01 05:11:34.939645 | orchestrator | 06114ab35506 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api
2025-06-01 05:11:34.939654 | orchestrator | cf04882195fc registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-01 05:11:34.939662 | orchestrator | c4ed2f18680e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9
2025-06-01 05:11:34.939671 | orchestrator | 347c3b66fbfc registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-01 05:11:34.939680 | orchestrator | 2bf670757fdb registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-01 05:11:34.939689 | orchestrator | a60572baf493 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-01 05:11:34.939714 | orchestrator | 5ceef958802e registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-01 05:11:34.939724 | orchestrator | b4d62cf813a2 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-06-01 05:11:34.939733 | orchestrator | 29bfeafcc1c5 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-01 05:11:34.939775 | orchestrator | a3438e07ec50 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-06-01 05:11:34.939807 | orchestrator | d582af91372c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-01 05:11:34.939821 | orchestrator | 1b4d044ededc registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-06-01 05:11:34.939830 | orchestrator | c97d6c99db96 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-01 05:11:34.939839 | orchestrator | dc5467e0765c registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-01 05:11:34.939848 | orchestrator | 4230235504a3 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-01 05:11:34.939857 | orchestrator | d21425278689 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-01 05:11:34.939865 | orchestrator | 291fad5c9aad registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-06-01 05:11:34.939874 | orchestrator | 8a4150864a97 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-06-01 05:11:34.939883 | orchestrator | f0e65f63b83d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2025-06-01 05:11:34.939891 | orchestrator | c0559ff96843 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (unhealthy) horizon
2025-06-01 05:11:34.939900 | orchestrator | d5cbcbea0caf registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-06-01 05:11:34.939908 | orchestrator | 5a226dfc4309 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-06-01 05:11:34.939917 | orchestrator | 4ac918a40a1b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-06-01 05:11:34.939925 | orchestrator | e0edb2ede297 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2025-06-01 05:11:34.939934 | orchestrator | 49c36c389913 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-06-01 05:11:34.939942 | orchestrator | b501e0cda2b2 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-06-01 05:11:34.939951 | orchestrator | ced1e6a27c3b registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-06-01 05:11:34.939960 | orchestrator | 97ca9dc8fc07 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-06-01 05:11:34.939980 | orchestrator | 675bce3e0f5e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd
2025-06-01 05:11:34.939990 | orchestrator | 3f1220b03fce registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db
2025-06-01 05:11:34.939999 | orchestrator | d1c27e37cd41 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2025-06-01 05:11:34.940007 | orchestrator | 8ef3e7290a8c registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-06-01 05:11:34.940016 | orchestrator | ab3d80898634 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-06-01 05:11:34.940024 | orchestrator | 5f785eefa7ed registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2025-06-01 05:11:34.940033 | orchestrator | 155f7303e70f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-06-01 05:11:34.940045 | orchestrator | 5d03116be9fa registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2025-06-01 05:11:34.940054 | orchestrator | 2923d4a75015 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2025-06-01 05:11:34.940063 | orchestrator | dbdaca0881ad registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2025-06-01 05:11:34.940071 | orchestrator | a606efdbcf12 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-06-01 05:11:34.940080 | orchestrator | 07d0f1e1d5e3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-01 05:11:34.940089 | orchestrator | a3e24a744db4 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-06-01 05:11:34.940097 | orchestrator | 09f920f873f0 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-01 05:11:35.285914 | orchestrator |
2025-06-01 05:11:35.286077 | orchestrator | ## Images @ testbed-node-1
2025-06-01 05:11:35.286095 | orchestrator |
2025-06-01 05:11:35.286106 | orchestrator | + echo
2025-06-01 05:11:35.286117 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-01 05:11:35.286128 | orchestrator | + echo
2025-06-01 05:11:35.286138 | orchestrator | + osism container testbed-node-1 images
2025-06-01 05:11:37.458547 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-01 05:11:37.458668 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 5af234df20cc 2 hours ago 1.27GB
2025-06-01 05:11:37.458695 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b9743cc0a7f6 4 hours ago 330MB
2025-06-01 05:11:37.458712 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 e3d062ebc33c 4 hours ago 1.01GB
2025-06-01 05:11:37.458725 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 32ebbc09103d 4 hours ago 629MB
2025-06-01 05:11:37.458734 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a320ec51a2b9 4 hours ago 419MB
2025-06-01 05:11:37.458835 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f5800a4656be 4 hours ago 1.55GB
2025-06-01 05:11:37.458849 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 178d34cce0ff 4 hours ago 1.59GB
2025-06-01 05:11:37.458858 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f4da4c70dc26 4 hours ago 747MB
2025-06-01 05:11:37.458867 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 06f9375cf986 4 hours ago 376MB
2025-06-01 05:11:37.458875 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 bdffde74ef72 4 hours ago 327MB
2025-06-01 05:11:37.458884 | orchestrator | registry.osism.tech/kolla/cron 2024.2 993e54ecf44d 4 hours ago 319MB
2025-06-01 05:11:37.458892 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 cdda7c2f0168 4 hours ago 319MB
2025-06-01 05:11:37.458900 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 13f6188cb9e5 4 hours ago 1.21GB
2025-06-01 05:11:37.458909 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5ed8b7a82f00 4 hours ago 359MB
2025-06-01 05:11:37.458917 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c0f221ee9695 4 hours ago 352MB
2025-06-01 05:11:37.458925 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 39b9dc9448eb 4 hours ago 354MB
2025-06-01 05:11:37.458933 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 2a7133799b00 4 hours ago 411MB
2025-06-01 05:11:37.458942 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 fb1b01f50484 4 hours ago 345MB
2025-06-01 05:11:37.458950 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 f045f4f01f80 4 hours ago 591MB
2025-06-01 05:11:37.458959 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 429ae22efbc7 4 hours ago 362MB
2025-06-01 05:11:37.458967 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 842513951a6d 4 hours ago 362MB
2025-06-01 05:11:37.458975 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 2a68da2fa13f 4 hours ago 325MB
2025-06-01 05:11:37.458984 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ff7ade953ec9 4 hours ago 326MB
2025-06-01 05:11:37.458993 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 904350615dac 4 hours ago 1.25GB
2025-06-01 05:11:37.459001 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 d35ce2a50365 4 hours ago 1.42GB
2025-06-01 05:11:37.459010 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 d13f632ab75c 4 hours ago 1.29GB
2025-06-01 05:11:37.459018 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 d373be916ca6 4 hours ago 1.3GB
2025-06-01 05:11:37.459026 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 41e606314cfe 4 hours ago 1.29GB
2025-06-01 05:11:37.459036 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1775a3afa66f 4 hours ago 1.41GB
2025-06-01 05:11:37.459052 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 7b0f959db40d 4 hours ago 1.41GB
2025-06-01 05:11:37.459066 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c0b785175579 4 hours ago 1.04GB
2025-06-01 05:11:37.459081 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 5e2a187fb204 4 hours ago 1.11GB
2025-06-01 05:11:37.459116 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 054ed9578e90 4 hours ago 1.11GB
2025-06-01 05:11:37.459128 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5a712364313a 4 hours ago 1.13GB
2025-06-01 05:11:37.459139 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 382f8cb57351 4 hours ago 1.1GB
2025-06-01 05:11:37.459157 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 55e0bcb4827e 4 hours ago 1.1GB
2025-06-01 05:11:37.459168 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 1fa293b622ec 4 hours ago 1.12GB
2025-06-01 05:11:37.459201 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 f31a56430677 4 hours ago 1.1GB
2025-06-01 05:11:37.459215 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 8c28f1975edc 4 hours ago 1.12GB
2025-06-01 05:11:37.459228 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 db4a7764403e 4 hours ago 1.05GB
2025-06-01 05:11:37.459240 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 8afa9b7984b9 4 hours ago 1.05GB
2025-06-01 05:11:37.459252 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a798bf08fe30 4 hours ago 1.05GB
2025-06-01 05:11:37.459265 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f73039e3046a 4 hours ago 1.05GB
2025-06-01 05:11:37.459276 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 29579b505c5c 4 hours ago 1.06GB
2025-06-01 05:11:37.459287 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 5aa1c7c51ff9 4 hours ago 1.06GB
2025-06-01 05:11:37.459297 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 175ea7a5f5f6 4 hours ago 1.15GB
2025-06-01 05:11:37.459308 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 a29ce89ab8f4 4 hours ago 1.06GB
2025-06-01 05:11:37.459318 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 1be482c719ca 4 hours ago 1.06GB
2025-06-01 05:11:37.459329 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 2aaea623abef 4 hours ago 1.06GB
2025-06-01 05:11:37.459339 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1790864e17ae 4 hours ago 1.2GB
2025-06-01 05:11:37.459350 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 ff61ed1ae5ad 4 hours ago 1.31GB
2025-06-01 05:11:37.459360 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 729cb5ecf679 4 hours ago 947MB
2025-06-01 05:11:37.459371 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 625c3d7aade0 4 hours ago 948MB
2025-06-01 05:11:37.459381 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 307a17bafcee 4 hours ago 947MB
2025-06-01 05:11:37.459392 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b2e3c9e55b3f 4 hours ago 948MB
2025-06-01 05:11:37.732508 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-01 05:11:37.733322 | orchestrator | ++ semver latest 5.0.0
2025-06-01 05:11:37.791873 | orchestrator |
2025-06-01 05:11:37.791996 | orchestrator | ## Containers @ testbed-node-2
2025-06-01 05:11:37.792022 | orchestrator |
2025-06-01 05:11:37.792043 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-01 05:11:37.792060 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-01 05:11:37.792080 | orchestrator | + echo
2025-06-01 05:11:37.792098 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-06-01 05:11:37.792117 | orchestrator | + echo
2025-06-01 05:11:37.792134 | orchestrator | + osism container testbed-node-2 ps
2025-06-01 05:11:39.948531 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-01 05:11:39.948639 | orchestrator | 3ffc8458fd18 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-01 05:11:39.948656 | orchestrator | e216568f110a registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-01 05:11:39.948668 | orchestrator | e5a0d83f611e registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-01 05:11:39.948703 | orchestrator | 709aa42d2390 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-01 05:11:39.948715 | orchestrator | 603f3b69ea65 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-01 05:11:39.948726 | orchestrator | 6b693d657660 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2025-06-01 05:11:39.948738 | orchestrator | 5f98cb5a5c82 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-01 05:11:39.948795 | orchestrator | 0365d0a23507 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-01 05:11:39.948807 | orchestrator | 64de66f91bd7 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-01 05:11:39.948818 | orchestrator | a55ba8d6d082 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-01 05:11:39.948829 | orchestrator | 8e4c613c4412 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-01 05:11:39.948840 | orchestrator | bd131fd29456 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-01 05:11:39.948851 | orchestrator | 0d4af8448ee9 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-01 05:11:39.948861 | orchestrator | 4310bb653fa5 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-01 05:11:39.948889 | orchestrator | 8493e6bf52ad registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-01 05:11:39.948901 | orchestrator | cdd7a94d6262 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api
2025-06-01 05:11:39.948912 | orchestrator | f1729efbc1d5 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-01 05:11:39.948922 | orchestrator | c71b747103b3 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9
2025-06-01 05:11:39.948933 | orchestrator | 6c5d0a1100df registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-01 05:11:39.948944 | orchestrator | e5c6f19dd597 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-01 05:11:39.948954 | orchestrator | e396de55f9f1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-01 05:11:39.948983 | orchestrator | 001802136d28 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-01 05:11:39.949004 | orchestrator | 4483359d28d6 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-01 05:11:39.949020 | orchestrator | f7aceb400ed1 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-01 05:11:39.949032 | orchestrator | bec14c60eed4 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-06-01
05:11:39.949043 | orchestrator | 2a83a310f386 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-01 05:11:39.949055 | orchestrator | fd2ae932789d registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-06-01 05:11:39.949068 | orchestrator | 286cc1077c41 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-01 05:11:39.949081 | orchestrator | 9ecbe87f1b13 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-01 05:11:39.949095 | orchestrator | 7a82add84a84 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-01 05:11:39.949109 | orchestrator | afa9532cc88c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-01 05:11:39.949121 | orchestrator | ac6cd2064a39 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-06-01 05:11:39.949132 | orchestrator | ffba823991f0 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-06-01 05:11:39.949144 | orchestrator | 48166512c50f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-06-01 05:11:39.949154 | orchestrator | b4475697ba8d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (unhealthy) horizon 2025-06-01 05:11:39.949165 | orchestrator | 7c2e5d665b02 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 
2025-06-01 05:11:39.949176 | orchestrator | 0c82f019460b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-06-01 05:11:39.949187 | orchestrator | dc938bbbb6a5 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-06-01 05:11:39.949198 | orchestrator | a5c8a063ffde registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-06-01 05:11:39.949208 | orchestrator | fa28237134b5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-06-01 05:11:39.949219 | orchestrator | d6af93a850f1 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-06-01 05:11:39.949230 | orchestrator | 985aca5f0a50 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-06-01 05:11:39.949247 | orchestrator | 10e97dc3b499 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-06-01 05:11:39.949264 | orchestrator | 510d9ab353e0 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-06-01 05:11:39.949275 | orchestrator | f690bb683bb6 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-06-01 05:11:39.949286 | orchestrator | bf65c642209b registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-06-01 05:11:39.949297 | orchestrator | 0b50888c3e66 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-06-01 05:11:39.949308 | orchestrator | d64a4c120c00 
registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-06-01 05:11:39.949324 | orchestrator | a6c35d1f2602 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2025-06-01 05:11:39.949335 | orchestrator | 3fca6dae31c0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-06-01 05:11:39.949346 | orchestrator | 0ca6be50c09d registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-06-01 05:11:39.949357 | orchestrator | 438a4578d615 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-06-01 05:11:39.949367 | orchestrator | 75b245c772fc registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-06-01 05:11:39.949383 | orchestrator | e5775c9ef9f6 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-06-01 05:11:39.949402 | orchestrator | 4ec469941a28 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-01 05:11:39.949424 | orchestrator | a8afd1ec223a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-06-01 05:11:39.949443 | orchestrator | 8a56545401ff registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-06-01 05:11:40.264931 | orchestrator | 2025-06-01 05:11:40.265033 | orchestrator | ## Images @ testbed-node-2 2025-06-01 05:11:40.265057 | orchestrator | 2025-06-01 05:11:40.265066 | orchestrator | + echo 2025-06-01 05:11:40.265076 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-01 05:11:40.265086 | 
orchestrator | + echo 2025-06-01 05:11:40.265095 | orchestrator | + osism container testbed-node-2 images 2025-06-01 05:11:42.375068 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-01 05:11:42.375207 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 5af234df20cc 2 hours ago 1.27GB 2025-06-01 05:11:42.375235 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b9743cc0a7f6 4 hours ago 330MB 2025-06-01 05:11:42.375264 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 e3d062ebc33c 4 hours ago 1.01GB 2025-06-01 05:11:42.375323 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 32ebbc09103d 4 hours ago 629MB 2025-06-01 05:11:42.375344 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a320ec51a2b9 4 hours ago 419MB 2025-06-01 05:11:42.375363 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f5800a4656be 4 hours ago 1.55GB 2025-06-01 05:11:42.375381 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 178d34cce0ff 4 hours ago 1.59GB 2025-06-01 05:11:42.375399 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f4da4c70dc26 4 hours ago 747MB 2025-06-01 05:11:42.375417 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 06f9375cf986 4 hours ago 376MB 2025-06-01 05:11:42.375436 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 bdffde74ef72 4 hours ago 327MB 2025-06-01 05:11:42.375454 | orchestrator | registry.osism.tech/kolla/cron 2024.2 993e54ecf44d 4 hours ago 319MB 2025-06-01 05:11:42.375474 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 cdda7c2f0168 4 hours ago 319MB 2025-06-01 05:11:42.375493 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 13f6188cb9e5 4 hours ago 1.21GB 2025-06-01 05:11:42.375511 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5ed8b7a82f00 4 hours ago 359MB 2025-06-01 05:11:42.375529 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c0f221ee9695 4 hours ago 352MB 
2025-06-01 05:11:42.375548 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 39b9dc9448eb 4 hours ago 354MB 2025-06-01 05:11:42.375566 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 2a7133799b00 4 hours ago 411MB 2025-06-01 05:11:42.375586 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 fb1b01f50484 4 hours ago 345MB 2025-06-01 05:11:42.375604 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 f045f4f01f80 4 hours ago 591MB 2025-06-01 05:11:42.375623 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 429ae22efbc7 4 hours ago 362MB 2025-06-01 05:11:42.375641 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 842513951a6d 4 hours ago 362MB 2025-06-01 05:11:42.375660 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 2a68da2fa13f 4 hours ago 325MB 2025-06-01 05:11:42.375680 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ff7ade953ec9 4 hours ago 326MB 2025-06-01 05:11:42.375698 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 904350615dac 4 hours ago 1.25GB 2025-06-01 05:11:42.375717 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 d35ce2a50365 4 hours ago 1.42GB 2025-06-01 05:11:42.375735 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 d13f632ab75c 4 hours ago 1.29GB 2025-06-01 05:11:42.375786 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 d373be916ca6 4 hours ago 1.3GB 2025-06-01 05:11:42.375807 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 41e606314cfe 4 hours ago 1.29GB 2025-06-01 05:11:42.375825 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1775a3afa66f 4 hours ago 1.41GB 2025-06-01 05:11:42.375843 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 7b0f959db40d 4 hours ago 1.41GB 2025-06-01 05:11:42.375862 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c0b785175579 4 hours ago 
1.04GB 2025-06-01 05:11:42.375882 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 5e2a187fb204 4 hours ago 1.11GB 2025-06-01 05:11:42.375900 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 054ed9578e90 4 hours ago 1.11GB 2025-06-01 05:11:42.375930 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5a712364313a 4 hours ago 1.13GB 2025-06-01 05:11:42.375949 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 382f8cb57351 4 hours ago 1.1GB 2025-06-01 05:11:42.375969 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 55e0bcb4827e 4 hours ago 1.1GB 2025-06-01 05:11:42.375987 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 1fa293b622ec 4 hours ago 1.12GB 2025-06-01 05:11:42.376030 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 f31a56430677 4 hours ago 1.1GB 2025-06-01 05:11:42.376049 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 8c28f1975edc 4 hours ago 1.12GB 2025-06-01 05:11:42.376068 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 db4a7764403e 4 hours ago 1.05GB 2025-06-01 05:11:42.376086 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 8afa9b7984b9 4 hours ago 1.05GB 2025-06-01 05:11:42.376105 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a798bf08fe30 4 hours ago 1.05GB 2025-06-01 05:11:42.376124 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f73039e3046a 4 hours ago 1.05GB 2025-06-01 05:11:42.376142 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 29579b505c5c 4 hours ago 1.06GB 2025-06-01 05:11:42.376160 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 5aa1c7c51ff9 4 hours ago 1.06GB 2025-06-01 05:11:42.376179 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 175ea7a5f5f6 4 hours ago 1.15GB 2025-06-01 05:11:42.376197 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 a29ce89ab8f4 4 
hours ago 1.06GB 2025-06-01 05:11:42.376215 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 1be482c719ca 4 hours ago 1.06GB 2025-06-01 05:11:42.376255 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 2aaea623abef 4 hours ago 1.06GB 2025-06-01 05:11:42.376285 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1790864e17ae 4 hours ago 1.2GB 2025-06-01 05:11:42.376305 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 ff61ed1ae5ad 4 hours ago 1.31GB 2025-06-01 05:11:42.376323 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 729cb5ecf679 4 hours ago 947MB 2025-06-01 05:11:42.376341 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 307a17bafcee 4 hours ago 947MB 2025-06-01 05:11:42.376369 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 625c3d7aade0 4 hours ago 948MB 2025-06-01 05:11:42.376398 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b2e3c9e55b3f 4 hours ago 948MB 2025-06-01 05:11:42.645548 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-01 05:11:42.656245 | orchestrator | + set -e 2025-06-01 05:11:42.656327 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 05:11:42.657278 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 05:11:42.657297 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 05:11:42.657306 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 05:11:42.657315 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 05:11:42.657324 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 05:11:42.657335 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 05:11:42.657344 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 05:11:42.657353 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 05:11:42.657361 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 05:11:42.657370 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 05:11:42.657379 | 
orchestrator | ++ export ARA=false 2025-06-01 05:11:42.657388 | orchestrator | ++ ARA=false 2025-06-01 05:11:42.657396 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 05:11:42.657405 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 05:11:42.657418 | orchestrator | ++ export TEMPEST=true 2025-06-01 05:11:42.657427 | orchestrator | ++ TEMPEST=true 2025-06-01 05:11:42.657459 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 05:11:42.657468 | orchestrator | ++ IS_ZUUL=true 2025-06-01 05:11:42.657476 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 05:11:42.657485 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 05:11:42.657494 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 05:11:42.657502 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 05:11:42.657511 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 05:11:42.657519 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 05:11:42.657527 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 05:11:42.657536 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 05:11:42.657544 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 05:11:42.657557 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 05:11:42.657742 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-01 05:11:42.657827 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-01 05:11:42.667228 | orchestrator | + set -e 2025-06-01 05:11:42.667295 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 05:11:42.667311 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 05:11:42.667326 | orchestrator | ++ INTERACTIVE=false 2025-06-01 05:11:42.667340 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 05:11:42.667353 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 05:11:42.667367 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-01 05:11:42.668439 | 
orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-01 05:11:42.675092 | orchestrator | 2025-06-01 05:11:42.675128 | orchestrator | # Ceph status 2025-06-01 05:11:42.675138 | orchestrator | 2025-06-01 05:11:42.675147 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 05:11:42.675157 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 05:11:42.675166 | orchestrator | + echo 2025-06-01 05:11:42.675175 | orchestrator | + echo '# Ceph status' 2025-06-01 05:11:42.675184 | orchestrator | + echo 2025-06-01 05:11:42.675192 | orchestrator | + ceph -s 2025-06-01 05:11:43.248237 | orchestrator | cluster: 2025-06-01 05:11:43.248354 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-01 05:11:43.248371 | orchestrator | health: HEALTH_OK 2025-06-01 05:11:43.248383 | orchestrator | 2025-06-01 05:11:43.248395 | orchestrator | services: 2025-06-01 05:11:43.248407 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2025-06-01 05:11:43.248419 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2 2025-06-01 05:11:43.248431 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-01 05:11:43.248443 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2025-06-01 05:11:43.248454 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-01 05:11:43.248464 | orchestrator | 2025-06-01 05:11:43.248475 | orchestrator | data: 2025-06-01 05:11:43.248486 | orchestrator | volumes: 1/1 healthy 2025-06-01 05:11:43.248496 | orchestrator | pools: 14 pools, 401 pgs 2025-06-01 05:11:43.248507 | orchestrator | objects: 555 objects, 2.2 GiB 2025-06-01 05:11:43.248518 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-01 05:11:43.248529 | orchestrator | pgs: 401 active+clean 2025-06-01 05:11:43.248539 | orchestrator | 2025-06-01 05:11:43.248550 | orchestrator | io: 2025-06-01 
05:11:43.248561 | orchestrator | client: 27 KiB/s rd, 0 B/s wr, 26 op/s rd, 17 op/s wr 2025-06-01 05:11:43.248572 | orchestrator | 2025-06-01 05:11:43.302203 | orchestrator | 2025-06-01 05:11:43.302327 | orchestrator | # Ceph versions 2025-06-01 05:11:43.302353 | orchestrator | 2025-06-01 05:11:43.302372 | orchestrator | + echo 2025-06-01 05:11:43.302390 | orchestrator | + echo '# Ceph versions' 2025-06-01 05:11:43.302403 | orchestrator | + echo 2025-06-01 05:11:43.302414 | orchestrator | + ceph versions 2025-06-01 05:11:43.873330 | orchestrator | { 2025-06-01 05:11:43.873437 | orchestrator | "mon": { 2025-06-01 05:11:43.873452 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 05:11:43.873465 | orchestrator | }, 2025-06-01 05:11:43.873477 | orchestrator | "mgr": { 2025-06-01 05:11:43.873498 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 05:11:43.873518 | orchestrator | }, 2025-06-01 05:11:43.873537 | orchestrator | "osd": { 2025-06-01 05:11:43.873555 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-01 05:11:43.873574 | orchestrator | }, 2025-06-01 05:11:43.873593 | orchestrator | "mds": { 2025-06-01 05:11:43.873613 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 05:11:43.873675 | orchestrator | }, 2025-06-01 05:11:43.873687 | orchestrator | "rgw": { 2025-06-01 05:11:43.873699 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 05:11:43.873709 | orchestrator | }, 2025-06-01 05:11:43.873720 | orchestrator | "overall": { 2025-06-01 05:11:43.873732 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-01 05:11:43.873743 | orchestrator | } 2025-06-01 05:11:43.873817 | orchestrator | } 2025-06-01 05:11:43.919519 | 
orchestrator | 2025-06-01 05:11:43.919627 | orchestrator | # Ceph OSD tree 2025-06-01 05:11:43.919648 | orchestrator | 2025-06-01 05:11:43.919671 | orchestrator | + echo 2025-06-01 05:11:43.919688 | orchestrator | + echo '# Ceph OSD tree' 2025-06-01 05:11:43.919705 | orchestrator | + echo 2025-06-01 05:11:43.919720 | orchestrator | + ceph osd df tree 2025-06-01 05:11:44.472465 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-01 05:11:44.472583 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-01 05:11:44.472598 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-01 05:11:44.472609 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.65 1.29 191 up osd.0 2025-06-01 05:11:44.472621 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 856 MiB 787 MiB 1 KiB 70 MiB 19 GiB 4.19 0.71 197 up osd.5 2025-06-01 05:11:44.472632 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-01 05:11:44.472642 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 925 MiB 851 MiB 1 KiB 74 MiB 19 GiB 4.52 0.76 176 up osd.1 2025-06-01 05:11:44.472653 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.31 1.24 216 up osd.3 2025-06-01 05:11:44.472664 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-01 05:11:44.472674 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.43 1.26 195 up osd.2 2025-06-01 05:11:44.472685 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 900 MiB 827 MiB 1 KiB 74 MiB 19 GiB 4.40 0.74 195 up osd.4 2025-06-01 05:11:44.472696 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-01 05:11:44.472707 | orchestrator | MIN/MAX VAR: 0.71/1.29 STDDEV: 1.55 2025-06-01 
05:11:44.524787 | orchestrator | 2025-06-01 05:11:44.524879 | orchestrator | # Ceph monitor status 2025-06-01 05:11:44.524893 | orchestrator | 2025-06-01 05:11:44.524904 | orchestrator | + echo 2025-06-01 05:11:44.524915 | orchestrator | + echo '# Ceph monitor status' 2025-06-01 05:11:44.524924 | orchestrator | + echo 2025-06-01 05:11:44.524934 | orchestrator | + ceph mon stat 2025-06-01 05:11:45.130823 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-01 05:11:45.172247 | orchestrator | 2025-06-01 05:11:45.172333 | orchestrator | # Ceph quorum status 2025-06-01 05:11:45.172344 | orchestrator | 2025-06-01 05:11:45.172352 | orchestrator | + echo 2025-06-01 05:11:45.172360 | orchestrator | + echo '# Ceph quorum status' 2025-06-01 05:11:45.172369 | orchestrator | + echo 2025-06-01 05:11:45.172638 | orchestrator | + ceph quorum_status 2025-06-01 05:11:45.172648 | orchestrator | + jq 2025-06-01 05:11:45.797927 | orchestrator | { 2025-06-01 05:11:45.798086 | orchestrator | "election_epoch": 6, 2025-06-01 05:11:45.798105 | orchestrator | "quorum": [ 2025-06-01 05:11:45.798117 | orchestrator | 0, 2025-06-01 05:11:45.798128 | orchestrator | 1, 2025-06-01 05:11:45.798139 | orchestrator | 2 2025-06-01 05:11:45.798150 | orchestrator | ], 2025-06-01 05:11:45.798177 | orchestrator | "quorum_names": [ 2025-06-01 05:11:45.798232 | orchestrator | "testbed-node-0", 2025-06-01 05:11:45.798244 | orchestrator | "testbed-node-1", 2025-06-01 05:11:45.798255 | orchestrator | "testbed-node-2" 2025-06-01 05:11:45.798266 | orchestrator | ], 2025-06-01 05:11:45.798277 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-01 05:11:45.798289 | orchestrator | 
"quorum_age": 1600, 2025-06-01 05:11:45.798300 | orchestrator | "features": { 2025-06-01 05:11:45.798311 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-01 05:11:45.798322 | orchestrator | "quorum_mon": [ 2025-06-01 05:11:45.798332 | orchestrator | "kraken", 2025-06-01 05:11:45.798342 | orchestrator | "luminous", 2025-06-01 05:11:45.798353 | orchestrator | "mimic", 2025-06-01 05:11:45.798364 | orchestrator | "osdmap-prune", 2025-06-01 05:11:45.798375 | orchestrator | "nautilus", 2025-06-01 05:11:45.798387 | orchestrator | "octopus", 2025-06-01 05:11:45.798399 | orchestrator | "pacific", 2025-06-01 05:11:45.798412 | orchestrator | "elector-pinging", 2025-06-01 05:11:45.798425 | orchestrator | "quincy", 2025-06-01 05:11:45.798437 | orchestrator | "reef" 2025-06-01 05:11:45.798450 | orchestrator | ] 2025-06-01 05:11:45.798463 | orchestrator | }, 2025-06-01 05:11:45.798475 | orchestrator | "monmap": { 2025-06-01 05:11:45.798488 | orchestrator | "epoch": 1, 2025-06-01 05:11:45.798501 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-01 05:11:45.798514 | orchestrator | "modified": "2025-06-01T04:44:42.874866Z", 2025-06-01 05:11:45.798526 | orchestrator | "created": "2025-06-01T04:44:42.874866Z", 2025-06-01 05:11:45.798539 | orchestrator | "min_mon_release": 18, 2025-06-01 05:11:45.798551 | orchestrator | "min_mon_release_name": "reef", 2025-06-01 05:11:45.798563 | orchestrator | "election_strategy": 1, 2025-06-01 05:11:45.798576 | orchestrator | "disallowed_leaders: ": "", 2025-06-01 05:11:45.798589 | orchestrator | "stretch_mode": false, 2025-06-01 05:11:45.798601 | orchestrator | "tiebreaker_mon": "", 2025-06-01 05:11:45.798614 | orchestrator | "removed_ranks: ": "", 2025-06-01 05:11:45.798626 | orchestrator | "features": { 2025-06-01 05:11:45.798638 | orchestrator | "persistent": [ 2025-06-01 05:11:45.798651 | orchestrator | "kraken", 2025-06-01 05:11:45.798663 | orchestrator | "luminous", 2025-06-01 05:11:45.798675 | orchestrator 
| "mimic", 2025-06-01 05:11:45.798688 | orchestrator | "osdmap-prune", 2025-06-01 05:11:45.798700 | orchestrator | "nautilus", 2025-06-01 05:11:45.798713 | orchestrator | "octopus", 2025-06-01 05:11:45.798726 | orchestrator | "pacific", 2025-06-01 05:11:45.798738 | orchestrator | "elector-pinging", 2025-06-01 05:11:45.798782 | orchestrator | "quincy", 2025-06-01 05:11:45.798794 | orchestrator | "reef" 2025-06-01 05:11:45.798806 | orchestrator | ], 2025-06-01 05:11:45.798817 | orchestrator | "optional": [] 2025-06-01 05:11:45.798828 | orchestrator | }, 2025-06-01 05:11:45.798839 | orchestrator | "mons": [ 2025-06-01 05:11:45.798850 | orchestrator | { 2025-06-01 05:11:45.798860 | orchestrator | "rank": 0, 2025-06-01 05:11:45.798871 | orchestrator | "name": "testbed-node-0", 2025-06-01 05:11:45.798882 | orchestrator | "public_addrs": { 2025-06-01 05:11:45.798893 | orchestrator | "addrvec": [ 2025-06-01 05:11:45.798903 | orchestrator | { 2025-06-01 05:11:45.798914 | orchestrator | "type": "v2", 2025-06-01 05:11:45.798925 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-01 05:11:45.798935 | orchestrator | "nonce": 0 2025-06-01 05:11:45.798946 | orchestrator | }, 2025-06-01 05:11:45.798957 | orchestrator | { 2025-06-01 05:11:45.798968 | orchestrator | "type": "v1", 2025-06-01 05:11:45.798979 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-01 05:11:45.798989 | orchestrator | "nonce": 0 2025-06-01 05:11:45.799000 | orchestrator | } 2025-06-01 05:11:45.799011 | orchestrator | ] 2025-06-01 05:11:45.799021 | orchestrator | }, 2025-06-01 05:11:45.799032 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-01 05:11:45.799043 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-01 05:11:45.799054 | orchestrator | "priority": 0, 2025-06-01 05:11:45.799064 | orchestrator | "weight": 0, 2025-06-01 05:11:45.799075 | orchestrator | "crush_location": "{}" 2025-06-01 05:11:45.799086 | orchestrator | }, 2025-06-01 05:11:45.799096 | orchestrator | { 2025-06-01 
05:11:45.799107 | orchestrator | "rank": 1,
2025-06-01 05:11:45.799118 | orchestrator | "name": "testbed-node-1",
2025-06-01 05:11:45.799128 | orchestrator | "public_addrs": {
2025-06-01 05:11:45.799139 | orchestrator | "addrvec": [
2025-06-01 05:11:45.799150 | orchestrator | {
2025-06-01 05:11:45.799161 | orchestrator | "type": "v2",
2025-06-01 05:11:45.799171 | orchestrator | "addr": "192.168.16.11:3300",
2025-06-01 05:11:45.799190 | orchestrator | "nonce": 0
2025-06-01 05:11:45.799201 | orchestrator | },
2025-06-01 05:11:45.799212 | orchestrator | {
2025-06-01 05:11:45.799222 | orchestrator | "type": "v1",
2025-06-01 05:11:45.799233 | orchestrator | "addr": "192.168.16.11:6789",
2025-06-01 05:11:45.799244 | orchestrator | "nonce": 0
2025-06-01 05:11:45.799271 | orchestrator | }
2025-06-01 05:11:45.799283 | orchestrator | ]
2025-06-01 05:11:45.799294 | orchestrator | },
2025-06-01 05:11:45.799304 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-06-01 05:11:45.799315 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-06-01 05:11:45.799326 | orchestrator | "priority": 0,
2025-06-01 05:11:45.799337 | orchestrator | "weight": 0,
2025-06-01 05:11:45.799347 | orchestrator | "crush_location": "{}"
2025-06-01 05:11:45.799358 | orchestrator | },
2025-06-01 05:11:45.799369 | orchestrator | {
2025-06-01 05:11:45.799380 | orchestrator | "rank": 2,
2025-06-01 05:11:45.799391 | orchestrator | "name": "testbed-node-2",
2025-06-01 05:11:45.799401 | orchestrator | "public_addrs": {
2025-06-01 05:11:45.799413 | orchestrator | "addrvec": [
2025-06-01 05:11:45.799423 | orchestrator | {
2025-06-01 05:11:45.799434 | orchestrator | "type": "v2",
2025-06-01 05:11:45.799445 | orchestrator | "addr": "192.168.16.12:3300",
2025-06-01 05:11:45.799455 | orchestrator | "nonce": 0
2025-06-01 05:11:45.799466 | orchestrator | },
2025-06-01 05:11:45.799477 | orchestrator | {
2025-06-01 05:11:45.799488 | orchestrator | "type": "v1",
2025-06-01 05:11:45.799498 | orchestrator | "addr": "192.168.16.12:6789",
2025-06-01 05:11:45.799509 | orchestrator | "nonce": 0
2025-06-01 05:11:45.799520 | orchestrator | }
2025-06-01 05:11:45.799530 | orchestrator | ]
2025-06-01 05:11:45.799541 | orchestrator | },
2025-06-01 05:11:45.799552 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-06-01 05:11:45.799563 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-06-01 05:11:45.799573 | orchestrator | "priority": 0,
2025-06-01 05:11:45.799584 | orchestrator | "weight": 0,
2025-06-01 05:11:45.799616 | orchestrator | "crush_location": "{}"
2025-06-01 05:11:45.799627 | orchestrator | }
2025-06-01 05:11:45.799638 | orchestrator | ]
2025-06-01 05:11:45.799649 | orchestrator | }
2025-06-01 05:11:45.799660 | orchestrator | }
2025-06-01 05:11:45.799684 | orchestrator |
2025-06-01 05:11:45.799697 | orchestrator | # Ceph free space status
2025-06-01 05:11:45.799707 | orchestrator |
2025-06-01 05:11:45.799718 | orchestrator | + echo
2025-06-01 05:11:45.799729 | orchestrator | + echo '# Ceph free space status'
2025-06-01 05:11:45.799740 | orchestrator | + echo
2025-06-01 05:11:45.799777 | orchestrator | + ceph df
2025-06-01 05:11:46.391100 | orchestrator | --- RAW STORAGE ---
2025-06-01 05:11:46.391239 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-06-01 05:11:46.391281 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-01 05:11:46.391301 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-01 05:11:46.391321 | orchestrator |
2025-06-01 05:11:46.391341 | orchestrator | --- POOLS ---
2025-06-01 05:11:46.391379 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-06-01 05:11:46.391401 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2025-06-01 05:11:46.391420 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-06-01 05:11:46.391439 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-06-01 05:11:46.391458 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-06-01 05:11:46.391478 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-06-01 05:11:46.391497 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-06-01 05:11:46.391516 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2025-06-01 05:11:46.391535 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-06-01 05:11:46.391555 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 52 GiB
2025-06-01 05:11:46.391575 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-06-01 05:11:46.391595 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-06-01 05:11:46.391648 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB
2025-06-01 05:11:46.391671 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-06-01 05:11:46.391693 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-06-01 05:11:46.439297 | orchestrator | ++ semver latest 5.0.0
2025-06-01 05:11:46.490472 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-01 05:11:46.490567 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-01 05:11:46.490583 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-06-01 05:11:46.490595 | orchestrator | + osism apply facts
2025-06-01 05:11:48.275526 | orchestrator | Registering Redlock._acquired_script
2025-06-01 05:11:48.275677 | orchestrator | Registering Redlock._extend_script
2025-06-01 05:11:48.275706 | orchestrator | Registering Redlock._release_script
2025-06-01 05:11:48.338597 | orchestrator | 2025-06-01 05:11:48 | INFO  | Task 0783d994-10b6-4441-b627-fcbba07e524c (facts) was prepared for execution.
2025-06-01 05:11:48.338687 | orchestrator | 2025-06-01 05:11:48 | INFO  | It takes a moment until task 0783d994-10b6-4441-b627-fcbba07e524c (facts) has been started and output is visible here.
2025-06-01 05:11:52.401822 | orchestrator |
2025-06-01 05:11:52.402595 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-01 05:11:52.405101 | orchestrator |
2025-06-01 05:11:52.405153 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-01 05:11:52.405175 | orchestrator | Sunday 01 June 2025 05:11:52 +0000 (0:00:00.214) 0:00:00.214 ***********
2025-06-01 05:11:53.422450 | orchestrator | ok: [testbed-manager]
2025-06-01 05:11:53.422573 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:11:53.422588 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:11:53.422600 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:11:53.423575 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:11:53.424403 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:11:53.425211 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:11:53.425884 | orchestrator |
2025-06-01 05:11:53.426816 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-01 05:11:53.427565 | orchestrator | Sunday 01 June 2025 05:11:53 +0000 (0:00:01.016) 0:00:01.230 ***********
2025-06-01 05:11:53.601585 | orchestrator | skipping: [testbed-manager]
2025-06-01 05:11:53.708964 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:11:53.794082 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:11:53.879080 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:11:53.960531 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:11:54.684517 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:11:54.687541 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:11:54.687699 | orchestrator |
2025-06-01 05:11:54.687733 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 05:11:54.687886 | orchestrator |
2025-06-01 05:11:54.689514 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 05:11:54.689599 | orchestrator | Sunday 01 June 2025 05:11:54 +0000 (0:00:01.265) 0:00:02.496 ***********
2025-06-01 05:12:00.067795 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:00.069543 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:00.072199 | orchestrator | ok: [testbed-manager]
2025-06-01 05:12:00.073838 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:00.075952 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:12:00.077077 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:12:00.078451 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:12:00.079172 | orchestrator |
2025-06-01 05:12:00.080546 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-01 05:12:00.081569 | orchestrator |
2025-06-01 05:12:00.082641 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-01 05:12:00.083287 | orchestrator | Sunday 01 June 2025 05:12:00 +0000 (0:00:05.385) 0:00:07.882 ***********
2025-06-01 05:12:00.246849 | orchestrator | skipping: [testbed-manager]
2025-06-01 05:12:00.337456 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:00.418855 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:12:00.507732 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:12:00.588074 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:12:00.639006 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:12:00.640105 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:12:00.640471 | orchestrator |
2025-06-01 05:12:00.641421 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:12:00.642820 | orchestrator | 2025-06-01 05:12:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 05:12:00.642838 | orchestrator | 2025-06-01 05:12:00 | INFO  | Please wait and do not abort execution.
2025-06-01 05:12:00.643520 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.644502 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.645244 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.646081 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.646563 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.647379 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.647980 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:00.648581 | orchestrator |
2025-06-01 05:12:00.649395 | orchestrator |
2025-06-01 05:12:00.650014 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:12:00.650904 | orchestrator | Sunday 01 June 2025 05:12:00 +0000 (0:00:00.572) 0:00:08.454 ***********
2025-06-01 05:12:00.652205 | orchestrator | ===============================================================================
2025-06-01 05:12:00.652945 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.39s
2025-06-01 05:12:00.653992 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2025-06-01 05:12:00.654381 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2025-06-01 05:12:00.655208 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2025-06-01 05:12:01.384554 | orchestrator | + osism validate ceph-mons
2025-06-01 05:12:03.139479 | orchestrator | Registering Redlock._acquired_script
2025-06-01 05:12:03.139579 | orchestrator | Registering Redlock._extend_script
2025-06-01 05:12:03.139594 | orchestrator | Registering Redlock._release_script
2025-06-01 05:12:23.134471 | orchestrator |
2025-06-01 05:12:23.134605 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-06-01 05:12:23.134624 | orchestrator |
2025-06-01 05:12:23.134636 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-01 05:12:23.134650 | orchestrator | Sunday 01 June 2025 05:12:07 +0000 (0:00:00.454) 0:00:00.454 ***********
2025-06-01 05:12:23.134670 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:23.134682 | orchestrator |
2025-06-01 05:12:23.134694 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-01 05:12:23.134706 | orchestrator | Sunday 01 June 2025 05:12:08 +0000 (0:00:00.660) 0:00:01.114 ***********
2025-06-01 05:12:23.134717 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:23.134728 | orchestrator |
2025-06-01 05:12:23.134793 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-01 05:12:23.134805 | orchestrator | Sunday 01 June 2025 05:12:09 +0000 (0:00:00.841) 0:00:01.956 ***********
2025-06-01 05:12:23.134869 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.134882 | orchestrator |
2025-06-01 05:12:23.134893 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-01 05:12:23.134904 | orchestrator | Sunday 01 June 2025 05:12:09 +0000 (0:00:00.287) 0:00:02.244 ***********
2025-06-01 05:12:23.134915 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.134926 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:23.134937 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:23.134947 | orchestrator |
2025-06-01 05:12:23.134958 | orchestrator | TASK [Get container info] ******************************************************
2025-06-01 05:12:23.134969 | orchestrator | Sunday 01 June 2025 05:12:09 +0000 (0:00:00.322) 0:00:02.567 ***********
2025-06-01 05:12:23.134980 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.134990 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:23.135001 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:23.135015 | orchestrator |
2025-06-01 05:12:23.135027 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-01 05:12:23.135040 | orchestrator | Sunday 01 June 2025 05:12:10 +0000 (0:00:01.008) 0:00:03.575 ***********
2025-06-01 05:12:23.135053 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135066 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:12:23.135079 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:12:23.135091 | orchestrator |
2025-06-01 05:12:23.135103 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-01 05:12:23.135116 | orchestrator | Sunday 01 June 2025 05:12:11 +0000 (0:00:00.321) 0:00:03.896 ***********
2025-06-01 05:12:23.135128 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.135141 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:23.135153 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:23.135166 | orchestrator |
2025-06-01 05:12:23.135179 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-01 05:12:23.135192 | orchestrator | Sunday 01 June 2025 05:12:11 +0000 (0:00:00.589) 0:00:04.486 ***********
2025-06-01 05:12:23.135221 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.135240 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:23.135259 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:23.135276 | orchestrator |
2025-06-01 05:12:23.135293 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-06-01 05:12:23.135310 | orchestrator | Sunday 01 June 2025 05:12:11 +0000 (0:00:00.316) 0:00:04.802 ***********
2025-06-01 05:12:23.135327 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135346 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:12:23.135364 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:12:23.135382 | orchestrator |
2025-06-01 05:12:23.135401 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-06-01 05:12:23.135421 | orchestrator | Sunday 01 June 2025 05:12:12 +0000 (0:00:00.291) 0:00:05.094 ***********
2025-06-01 05:12:23.135440 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.135467 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:23.135484 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:23.135502 | orchestrator |
2025-06-01 05:12:23.135521 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 05:12:23.135540 | orchestrator | Sunday 01 June 2025 05:12:12 +0000 (0:00:00.307) 0:00:05.401 ***********
2025-06-01 05:12:23.135559 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135574 | orchestrator |
2025-06-01 05:12:23.135585 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 05:12:23.135596 | orchestrator | Sunday 01 June 2025 05:12:13 +0000 (0:00:00.727) 0:00:06.129 ***********
2025-06-01 05:12:23.135607 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135618 | orchestrator |
2025-06-01 05:12:23.135629 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 05:12:23.135639 | orchestrator | Sunday 01 June 2025 05:12:13 +0000 (0:00:00.232) 0:00:06.361 ***********
2025-06-01 05:12:23.135662 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135674 | orchestrator |
2025-06-01 05:12:23.135686 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:23.135696 | orchestrator | Sunday 01 June 2025 05:12:13 +0000 (0:00:00.245) 0:00:06.607 ***********
2025-06-01 05:12:23.135707 | orchestrator |
2025-06-01 05:12:23.135718 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:23.135729 | orchestrator | Sunday 01 June 2025 05:12:13 +0000 (0:00:00.077) 0:00:06.684 ***********
2025-06-01 05:12:23.135791 | orchestrator |
2025-06-01 05:12:23.135803 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:23.135814 | orchestrator | Sunday 01 June 2025 05:12:13 +0000 (0:00:00.089) 0:00:06.774 ***********
2025-06-01 05:12:23.135825 | orchestrator |
2025-06-01 05:12:23.135836 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 05:12:23.135846 | orchestrator | Sunday 01 June 2025 05:12:14 +0000 (0:00:00.081) 0:00:06.856 ***********
2025-06-01 05:12:23.135857 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135868 | orchestrator |
2025-06-01 05:12:23.135879 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-01 05:12:23.135890 | orchestrator | Sunday 01 June 2025 05:12:14 +0000 (0:00:00.246) 0:00:07.102 ***********
2025-06-01 05:12:23.135901 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.135912 | orchestrator |
2025-06-01 05:12:23.135945 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-06-01 05:12:23.135957 | orchestrator | Sunday 01 June 2025 05:12:14 +0000 (0:00:00.247) 0:00:07.349 ***********
2025-06-01 05:12:23.135968 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.135979 | orchestrator |
2025-06-01 05:12:23.135990 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-06-01 05:12:23.136001 | orchestrator | Sunday 01 June 2025 05:12:14 +0000 (0:00:00.115) 0:00:07.464 ***********
2025-06-01 05:12:23.136011 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:12:23.136023 | orchestrator |
2025-06-01 05:12:23.136034 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-06-01 05:12:23.136045 | orchestrator | Sunday 01 June 2025 05:12:16 +0000 (0:00:01.468) 0:00:08.933 ***********
2025-06-01 05:12:23.136056 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136067 | orchestrator |
2025-06-01 05:12:23.136078 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-06-01 05:12:23.136089 | orchestrator | Sunday 01 June 2025 05:12:16 +0000 (0:00:00.320) 0:00:09.253 ***********
2025-06-01 05:12:23.136100 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.136111 | orchestrator |
2025-06-01 05:12:23.136122 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-06-01 05:12:23.136133 | orchestrator | Sunday 01 June 2025 05:12:16 +0000 (0:00:00.335) 0:00:09.589 ***********
2025-06-01 05:12:23.136143 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136155 | orchestrator |
2025-06-01 05:12:23.136166 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-06-01 05:12:23.136177 | orchestrator | Sunday 01 June 2025 05:12:17 +0000 (0:00:00.345) 0:00:09.935 ***********
2025-06-01 05:12:23.136188 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136199 | orchestrator |
2025-06-01 05:12:23.136210 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-06-01 05:12:23.136221 | orchestrator | Sunday 01 June 2025 05:12:17 +0000 (0:00:00.303) 0:00:10.239 ***********
2025-06-01 05:12:23.136232 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.136243 | orchestrator |
2025-06-01 05:12:23.136254 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-06-01 05:12:23.136265 | orchestrator | Sunday 01 June 2025 05:12:17 +0000 (0:00:00.125) 0:00:10.364 ***********
2025-06-01 05:12:23.136276 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136287 | orchestrator |
2025-06-01 05:12:23.136298 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-06-01 05:12:23.136316 | orchestrator | Sunday 01 June 2025 05:12:17 +0000 (0:00:00.128) 0:00:10.493 ***********
2025-06-01 05:12:23.136327 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136338 | orchestrator |
2025-06-01 05:12:23.136349 | orchestrator | TASK [Gather status data] ******************************************************
2025-06-01 05:12:23.136360 | orchestrator | Sunday 01 June 2025 05:12:17 +0000 (0:00:00.123) 0:00:10.616 ***********
2025-06-01 05:12:23.136371 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:12:23.136382 | orchestrator |
2025-06-01 05:12:23.136393 | orchestrator | TASK [Set health test data] ****************************************************
2025-06-01 05:12:23.136404 | orchestrator | Sunday 01 June 2025 05:12:19 +0000 (0:00:01.259) 0:00:11.876 ***********
2025-06-01 05:12:23.136415 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136426 | orchestrator |
2025-06-01 05:12:23.136437 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-06-01 05:12:23.136447 | orchestrator | Sunday 01 June 2025 05:12:19 +0000 (0:00:00.337) 0:00:12.213 ***********
2025-06-01 05:12:23.136458 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.136469 | orchestrator |
2025-06-01 05:12:23.136480 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-06-01 05:12:23.136491 | orchestrator | Sunday 01 June 2025 05:12:19 +0000 (0:00:00.139) 0:00:12.353 ***********
2025-06-01 05:12:23.136507 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:23.136519 | orchestrator |
2025-06-01 05:12:23.136530 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-06-01 05:12:23.136541 | orchestrator | Sunday 01 June 2025 05:12:19 +0000 (0:00:00.160) 0:00:12.513 ***********
2025-06-01 05:12:23.136552 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.136571 | orchestrator |
2025-06-01 05:12:23.136587 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-06-01 05:12:23.136605 | orchestrator | Sunday 01 June 2025 05:12:19 +0000 (0:00:00.139) 0:00:12.653 ***********
2025-06-01 05:12:23.136621 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.136637 | orchestrator |
2025-06-01 05:12:23.136653 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-01 05:12:23.136669 | orchestrator | Sunday 01 June 2025 05:12:20 +0000 (0:00:00.395) 0:00:13.049 ***********
2025-06-01 05:12:23.136686 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:23.136706 | orchestrator |
2025-06-01 05:12:23.136726 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-01 05:12:23.136773 | orchestrator | Sunday 01 June 2025 05:12:20 +0000 (0:00:00.313) 0:00:13.363 ***********
2025-06-01 05:12:23.136786 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:23.136797 | orchestrator |
2025-06-01 05:12:23.136808 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 05:12:23.136819 | orchestrator | Sunday 01 June 2025 05:12:20 +0000 (0:00:00.245) 0:00:13.609 ***********
2025-06-01 05:12:23.136830 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:23.136846 | orchestrator |
2025-06-01 05:12:23.136858 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 05:12:23.136869 | orchestrator | Sunday 01 June 2025 05:12:22 +0000 (0:00:01.605) 0:00:15.214 ***********
2025-06-01 05:12:23.136880 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:23.136891 | orchestrator |
2025-06-01 05:12:23.136901 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 05:12:23.136912 | orchestrator | Sunday 01 June 2025 05:12:22 +0000 (0:00:00.244) 0:00:15.458 ***********
2025-06-01 05:12:23.136923 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:23.136934 | orchestrator |
2025-06-01 05:12:23.136954 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:25.716797 | orchestrator | Sunday 01 June 2025 05:12:22 +0000 (0:00:00.248) 0:00:15.707 ***********
2025-06-01 05:12:25.716905 | orchestrator |
2025-06-01 05:12:25.716920 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:25.716958 | orchestrator | Sunday 01 June 2025 05:12:22 +0000 (0:00:00.069) 0:00:15.777 ***********
2025-06-01 05:12:25.716969 | orchestrator |
2025-06-01 05:12:25.716981 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:25.716992 | orchestrator | Sunday 01 June 2025 05:12:23 +0000 (0:00:00.094) 0:00:15.872 ***********
2025-06-01 05:12:25.717003 | orchestrator |
2025-06-01 05:12:25.717014 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-01 05:12:25.717024 | orchestrator | Sunday 01 June 2025 05:12:23 +0000 (0:00:00.072) 0:00:15.944 ***********
2025-06-01 05:12:25.717036 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:25.717046 | orchestrator |
2025-06-01 05:12:25.717057 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 05:12:25.717068 | orchestrator | Sunday 01 June 2025 05:12:24 +0000 (0:00:01.612) 0:00:17.557 ***********
2025-06-01 05:12:25.717079 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-01 05:12:25.717090 | orchestrator |  "msg": [
2025-06-01 05:12:25.717102 | orchestrator |  "Validator run completed.",
2025-06-01 05:12:25.717113 | orchestrator |  "You can find the report file here:",
2025-06-01 05:12:25.717124 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-01T05:12:08+00:00-report.json",
2025-06-01 05:12:25.717136 | orchestrator |  "on the following host:",
2025-06-01 05:12:25.717147 | orchestrator |  "testbed-manager"
2025-06-01 05:12:25.717158 | orchestrator |  ]
2025-06-01 05:12:25.717169 | orchestrator | }
2025-06-01 05:12:25.717179 | orchestrator |
2025-06-01 05:12:25.717190 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:12:25.717202 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-01 05:12:25.717215 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:25.717226 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:25.717237 | orchestrator |
2025-06-01 05:12:25.717248 | orchestrator |
2025-06-01 05:12:25.717259 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:12:25.717269 | orchestrator | Sunday 01 June 2025 05:12:25 +0000 (0:00:00.616) 0:00:18.174 ***********
2025-06-01 05:12:25.717280 | orchestrator | ===============================================================================
2025-06-01 05:12:25.717291 | orchestrator | Write report file ------------------------------------------------------- 1.61s
2025-06-01 05:12:25.717302 | orchestrator | Aggregate test results step one ----------------------------------------- 1.61s
2025-06-01 05:12:25.717312 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.47s
2025-06-01 05:12:25.717323 | orchestrator | Gather status data ------------------------------------------------------ 1.26s
2025-06-01 05:12:25.717334 | orchestrator | Get container info ------------------------------------------------------ 1.01s
2025-06-01 05:12:25.717345 | orchestrator | Create report output directory ------------------------------------------ 0.84s
2025-06-01 05:12:25.717356 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2025-06-01 05:12:25.717367 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s
2025-06-01 05:12:25.717377 | orchestrator | Print report file information ------------------------------------------- 0.62s
2025-06-01 05:12:25.717388 | orchestrator | Set test result to passed if container is existing ---------------------- 0.59s
2025-06-01 05:12:25.717399 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.40s
2025-06-01 05:12:25.717410 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s
2025-06-01 05:12:25.717420 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2025-06-01 05:12:25.717438 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.34s
2025-06-01 05:12:25.717448 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2025-06-01 05:12:25.717459 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2025-06-01 05:12:25.717470 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s
2025-06-01 05:12:25.717480 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-01 05:12:25.717491 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.31s
2025-06-01 05:12:25.717519 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s
2025-06-01 05:12:25.979340 | orchestrator | + osism validate ceph-mgrs
2025-06-01 05:12:27.838110 | orchestrator | Registering Redlock._acquired_script
2025-06-01 05:12:27.838210 | orchestrator | Registering Redlock._extend_script
2025-06-01 05:12:27.838225 | orchestrator | Registering Redlock._release_script
2025-06-01 05:12:47.558394 | orchestrator |
2025-06-01 05:12:47.558506 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-01 05:12:47.558522 | orchestrator |
2025-06-01 05:12:47.558533 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-01 05:12:47.558545 | orchestrator | Sunday 01 June 2025 05:12:32 +0000 (0:00:00.434) 0:00:00.434 ***********
2025-06-01 05:12:47.558557 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.558568 | orchestrator |
2025-06-01 05:12:47.558578 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-01 05:12:47.558589 | orchestrator | Sunday 01 June 2025 05:12:32 +0000 (0:00:00.641) 0:00:01.076 ***********
2025-06-01 05:12:47.558600 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.558611 | orchestrator |
2025-06-01 05:12:47.558622 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-01 05:12:47.558634 | orchestrator | Sunday 01 June 2025 05:12:33 +0000 (0:00:00.913) 0:00:01.990 ***********
2025-06-01 05:12:47.558645 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.558657 | orchestrator |
2025-06-01 05:12:47.558668 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-01 05:12:47.558678 | orchestrator | Sunday 01 June 2025 05:12:34 +0000 (0:00:00.252) 0:00:02.243 ***********
2025-06-01 05:12:47.558689 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.558700 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:47.558710 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:47.558751 | orchestrator |
2025-06-01 05:12:47.558765 | orchestrator | TASK [Get container info] ******************************************************
2025-06-01 05:12:47.558776 | orchestrator | Sunday 01 June 2025 05:12:34 +0000 (0:00:00.306) 0:00:02.549 ***********
2025-06-01 05:12:47.558786 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.558797 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:47.558808 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:47.558818 | orchestrator |
2025-06-01 05:12:47.558829 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-01 05:12:47.558839 | orchestrator | Sunday 01 June 2025 05:12:35 +0000 (0:00:00.940) 0:00:03.490 ***********
2025-06-01 05:12:47.558850 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.558861 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:12:47.558872 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:12:47.558882 | orchestrator |
2025-06-01 05:12:47.558893 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-01 05:12:47.558904 | orchestrator | Sunday 01 June 2025 05:12:35 +0000 (0:00:00.284) 0:00:03.775 ***********
2025-06-01 05:12:47.558914 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.558925 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:47.558937 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:47.558950 | orchestrator |
2025-06-01 05:12:47.558962 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-01 05:12:47.558998 | orchestrator | Sunday 01 June 2025 05:12:36 +0000 (0:00:00.564) 0:00:04.339 ***********
2025-06-01 05:12:47.559011 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.559024 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:47.559036 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:47.559049 | orchestrator |
2025-06-01 05:12:47.559061 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-01 05:12:47.559074 | orchestrator | Sunday 01 June 2025 05:12:36 +0000 (0:00:00.307) 0:00:04.647 ***********
2025-06-01 05:12:47.559087 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.559098 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:12:47.559111 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:12:47.559124 | orchestrator |
2025-06-01 05:12:47.559137 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-01 05:12:47.559149 | orchestrator | Sunday 01 June 2025 05:12:36 +0000 (0:00:00.297) 0:00:04.944 ***********
2025-06-01 05:12:47.559161 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.559174 | orchestrator | ok: [testbed-node-1]
2025-06-01 05:12:47.559185 | orchestrator | ok: [testbed-node-2]
2025-06-01 05:12:47.559196 | orchestrator |
2025-06-01 05:12:47.559209 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 05:12:47.559228 | orchestrator | Sunday 01 June 2025 05:12:37 +0000 (0:00:00.315) 0:00:05.260 ***********
2025-06-01 05:12:47.559246 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.559263 | orchestrator |
2025-06-01 05:12:47.559298 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 05:12:47.559318 | orchestrator | Sunday 01 June 2025 05:12:37 +0000 (0:00:00.730) 0:00:05.991 ***********
2025-06-01 05:12:47.559336 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.559354 | orchestrator |
2025-06-01 05:12:47.559372 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 05:12:47.559390 | orchestrator | Sunday 01 June 2025 05:12:38 +0000 (0:00:00.260) 0:00:06.251 ***********
2025-06-01 05:12:47.559409 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.559427 | orchestrator |
2025-06-01 05:12:47.559446 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:47.559463 | orchestrator | Sunday 01 June 2025 05:12:38 +0000 (0:00:00.256) 0:00:06.508 ***********
2025-06-01 05:12:47.559481 | orchestrator |
2025-06-01 05:12:47.559499 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:47.559517 | orchestrator | Sunday 01 June 2025 05:12:38 +0000 (0:00:00.085) 0:00:06.593 ***********
2025-06-01 05:12:47.559536 | orchestrator |
2025-06-01 05:12:47.559553 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:47.559569 | orchestrator | Sunday 01 June 2025 05:12:38 +0000 (0:00:00.073) 0:00:06.666 ***********
2025-06-01 05:12:47.559586 | orchestrator |
2025-06-01 05:12:47.559604 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 05:12:47.559622 | orchestrator | Sunday 01 June 2025 05:12:38 +0000 (0:00:00.085) 0:00:06.752 ***********
2025-06-01 05:12:47.559640 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.559660 | orchestrator |
2025-06-01 05:12:47.559678 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-01 05:12:47.559696 | orchestrator | Sunday 01 June 2025 05:12:38 +0000 (0:00:00.242) 0:00:06.994 ***********
2025-06-01 05:12:47.559715 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.559811 | orchestrator |
2025-06-01 05:12:47.559855 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-01 05:12:47.559875 | orchestrator | Sunday 01 June 2025 05:12:39 +0000 (0:00:00.242) 0:00:07.237 ***********
2025-06-01 05:12:47.559894 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.559914 | orchestrator |
2025-06-01 05:12:47.559931 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-01 05:12:47.559949 | orchestrator | Sunday 01 June 2025 05:12:39 +0000 (0:00:00.128) 0:00:07.365 ***********
2025-06-01 05:12:47.559984 | orchestrator | changed: [testbed-node-0]
2025-06-01 05:12:47.560005 | orchestrator |
2025-06-01 05:12:47.560023 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-01 05:12:47.560043 | orchestrator | Sunday 01 June 2025 05:12:41 +0000 (0:00:01.916) 0:00:09.282 ***********
2025-06-01 05:12:47.560062 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.560080 | orchestrator |
2025-06-01 05:12:47.560099 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-01 05:12:47.560118 | orchestrator | Sunday 01 June 2025 05:12:41 +0000 (0:00:00.239) 0:00:09.522 ***********
2025-06-01 05:12:47.560136 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.560155 | orchestrator |
2025-06-01 05:12:47.560173 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-01 05:12:47.560193 | orchestrator | Sunday 01 June 2025 05:12:42 +0000 (0:00:00.941) 0:00:10.463 ***********
2025-06-01 05:12:47.560210 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.560232 | orchestrator |
2025-06-01 05:12:47.560251 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-01 05:12:47.560270 | orchestrator | Sunday 01 June 2025 05:12:42 +0000 (0:00:00.155) 0:00:10.618 ***********
2025-06-01 05:12:47.560288 | orchestrator | ok: [testbed-node-0]
2025-06-01 05:12:47.560306 | orchestrator |
2025-06-01 05:12:47.560326 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-01 05:12:47.560346 | orchestrator | Sunday 01 June 2025 05:12:42 +0000 (0:00:00.158) 0:00:10.777 ***********
2025-06-01 05:12:47.560364 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.560384 | orchestrator |
2025-06-01 05:12:47.560396 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-01 05:12:47.560406 | orchestrator | Sunday 01 June 2025 05:12:42 +0000 (0:00:00.264) 0:00:11.041 ***********
2025-06-01 05:12:47.560417 | orchestrator | skipping: [testbed-node-0]
2025-06-01 05:12:47.560428 | orchestrator |
2025-06-01 05:12:47.560438 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 05:12:47.560449 | orchestrator | Sunday 01 June 2025 05:12:43 +0000 (0:00:00.236) 0:00:11.277 ***********
2025-06-01 05:12:47.560460 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.560470 | orchestrator |
2025-06-01 05:12:47.560481 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 05:12:47.560492 | orchestrator | Sunday 01 June 2025 05:12:44 +0000 (0:00:01.353) 0:00:12.630 ***********
2025-06-01 05:12:47.560502 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.560513 | orchestrator |
2025-06-01 05:12:47.560523 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 05:12:47.560539 | orchestrator | Sunday 01 June 2025 05:12:44 +0000 (0:00:00.252) 0:00:12.883 ***********
2025-06-01 05:12:47.560558 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.560576 | orchestrator |
2025-06-01 05:12:47.560594 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:47.560613 | orchestrator | Sunday 01 June 2025 05:12:45 +0000 (0:00:00.240) 0:00:13.123 ***********
2025-06-01 05:12:47.560633 | orchestrator |
2025-06-01 05:12:47.560651 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:47.560671 | orchestrator | Sunday 01 June 2025 05:12:45 +0000 (0:00:00.068) 0:00:13.192 ***********
2025-06-01 05:12:47.560690 | orchestrator |
2025-06-01 05:12:47.560710 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 05:12:47.560794 | orchestrator | Sunday 01 June 2025 05:12:45 +0000 (0:00:00.085) 0:00:13.277 ***********
2025-06-01 05:12:47.560808 | orchestrator |
2025-06-01 05:12:47.560819 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-01 05:12:47.560830 | orchestrator | Sunday 01 June 2025 05:12:45 +0000 (0:00:00.077) 0:00:13.354 ***********
2025-06-01 05:12:47.560841 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:47.560865 | orchestrator |
2025-06-01 05:12:47.560876 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 05:12:47.560887 | orchestrator | Sunday 01 June 2025 05:12:47 +0000 (0:00:01.870) 0:00:15.224 ***********
2025-06-01 05:12:47.560897 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-01 05:12:47.560910 | orchestrator |     "msg": [
2025-06-01 05:12:47.560929 | orchestrator |         "Validator run completed.",
2025-06-01 05:12:47.560947 | orchestrator |         "You can find the report file here:",
2025-06-01 05:12:47.560966 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-06-01T05:12:32+00:00-report.json",
2025-06-01 05:12:47.560986 | orchestrator |         "on the following host:",
2025-06-01 05:12:47.561005 | orchestrator |         "testbed-manager"
2025-06-01 05:12:47.561023 | orchestrator |     ]
2025-06-01 05:12:47.561042 | orchestrator | }
2025-06-01 05:12:47.561061 | orchestrator |
2025-06-01 05:12:47.561080 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:12:47.561100 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-01 05:12:47.561121 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:47.561157 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 05:12:47.926078 | orchestrator |
2025-06-01 05:12:47.926193 | orchestrator |
2025-06-01 05:12:47.926217 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:12:47.926240 | orchestrator | Sunday 01 June 2025 05:12:47 +0000 (0:00:00.392) 0:00:15.617 ***********
2025-06-01 05:12:47.926260 | orchestrator | ===============================================================================
2025-06-01 05:12:47.926279 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.92s
2025-06-01 05:12:47.926298 | orchestrator | Write report file ------------------------------------------------------- 1.87s
2025-06-01 05:12:47.926318 | orchestrator | Aggregate test results step one ----------------------------------------- 1.35s
2025-06-01 05:12:47.926337 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.94s
2025-06-01 05:12:47.926355 | orchestrator | Get container info ------------------------------------------------------ 0.94s
2025-06-01 05:12:47.926372 | orchestrator | Create report output directory ------------------------------------------ 0.91s
2025-06-01 05:12:47.926389 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2025-06-01 05:12:47.926406 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-06-01 05:12:47.926422 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2025-06-01 05:12:47.926439 | orchestrator | Print report file information ------------------------------------------- 0.39s
2025-06-01 05:12:47.926457 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s
2025-06-01 05:12:47.926474 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-06-01 05:12:47.926493 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2025-06-01 05:12:47.926509 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2025-06-01 05:12:47.926527 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2025-06-01 05:12:47.926546 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s
2025-06-01 05:12:47.926566 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-06-01 05:12:47.926584 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2025-06-01 05:12:47.926603 | orchestrator | Define report vars ------------------------------------------------------ 0.25s
2025-06-01 05:12:47.926654 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-06-01 05:12:48.202519 | orchestrator | + osism validate ceph-osds
2025-06-01 05:12:49.944084 | orchestrator | Registering Redlock._acquired_script
2025-06-01 05:12:49.944185 | orchestrator | Registering Redlock._extend_script
2025-06-01 05:12:49.944200 | orchestrator | Registering Redlock._release_script
2025-06-01 05:12:58.709434 | orchestrator |
2025-06-01 05:12:58.709553 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-01 05:12:58.709572 | orchestrator |
2025-06-01 05:12:58.709585 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-01 05:12:58.709597 | orchestrator | Sunday 01 June 2025 05:12:54 +0000 (0:00:00.450) 0:00:00.450 ***********
2025-06-01 05:12:58.709609 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:58.709620 | orchestrator |
2025-06-01 05:12:58.709631 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 05:12:58.709642 | orchestrator | Sunday 01 June 2025 05:12:54 +0000 (0:00:00.620) 0:00:01.071 ***********
2025-06-01 05:12:58.709653 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:58.709664 | orchestrator |
2025-06-01 05:12:58.709675 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-01 05:12:58.709705 | orchestrator | Sunday 01 June 2025 05:12:55 +0000 (0:00:00.451) 0:00:01.523 ***********
2025-06-01 05:12:58.709747 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 05:12:58.709759 | orchestrator |
2025-06-01 05:12:58.709775 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-01 05:12:58.709787 | orchestrator | Sunday 01 June 2025 05:12:56 +0000 (0:00:00.963) 0:00:02.486 ***********
2025-06-01 05:12:58.709798 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:12:58.709810 | orchestrator |
2025-06-01 05:12:58.709821 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-01 05:12:58.709833 | orchestrator | Sunday 01 June 2025 05:12:56 +0000 (0:00:00.134) 0:00:02.620 ***********
2025-06-01 05:12:58.709844 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:12:58.709855 | orchestrator |
2025-06-01 05:12:58.709866 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-01 05:12:58.709878 | orchestrator | Sunday 01 June 2025 05:12:56 +0000 (0:00:00.150) 0:00:02.771 ***********
2025-06-01 05:12:58.709889 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:12:58.709900 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:12:58.709911 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:12:58.709922 | orchestrator |
2025-06-01 05:12:58.709932 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-01 05:12:58.709943 | orchestrator | Sunday 01 June 2025 05:12:56 +0000 (0:00:00.312) 0:00:03.083 ***********
2025-06-01 05:12:58.709954 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:12:58.709965 | orchestrator |
2025-06-01 05:12:58.709976 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-01 05:12:58.709986 | orchestrator | Sunday 01 June 2025 05:12:57 +0000 (0:00:00.138) 0:00:03.222 ***********
2025-06-01 05:12:58.709997 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:12:58.710008 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:12:58.710116 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:12:58.710131 | orchestrator |
2025-06-01 05:12:58.710142 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-01 05:12:58.710153 | orchestrator | Sunday 01 June 2025 05:12:57 +0000 (0:00:00.311) 0:00:03.534 ***********
2025-06-01 05:12:58.710164 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:12:58.710176 | orchestrator |
2025-06-01 05:12:58.710195 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-01 05:12:58.710214 | orchestrator | Sunday 01 June 2025 05:12:57 +0000 (0:00:00.576) 0:00:04.110 ***********
2025-06-01 05:12:58.710232 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:12:58.710250 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:12:58.710267 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:12:58.710314 | orchestrator |
2025-06-01 05:12:58.710334 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-01 05:12:58.710352 | orchestrator | Sunday 01 June 2025 05:12:58 +0000 (0:00:00.487) 0:00:04.598 ***********
2025-06-01 05:12:58.710373 | orchestrator | skipping: [testbed-node-3] => (item={'id': '792622a1226d7e944ca4cd2822fdd74a52e67e0f89cc9c876a1b2ccbd46139e9', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-01 05:12:58.710395 | orchestrator | skipping: [testbed-node-3] => (item={'id': '08d89ff29ad50794a5624b06948b10b1289d4a91a2a63699fe9710798411c70b', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-01 05:12:58.710415 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f0ec36b380fdf6b37e58dc11fe40fc423afcb830fb290abccd6bf2642ca9f51e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-01 05:12:58.710444 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c5f86d0c84a368f3cb2435e9bf2bcfbb7f069bd8ebf24526a297c7c79a7e7c6e', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-01 05:12:58.710463 | orchestrator | skipping: [testbed-node-3] => (item={'id': '501fbda26ccf0b16a311a1ebcf5ce90c6ca0cc958e52dbb2314487531cb17df8', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-01 05:12:58.710505 | orchestrator | skipping: [testbed-node-3] => (item={'id': '50a0fd98650904a1874c8a01aed2e3e6f018933f077637510667b4d17583dbf8', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-01 05:12:58.710526 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e8a1bb7a1f95271ba92357dbebb5f1b4a5591452880417fffb90cbd12c4e1f6d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-01 05:12:58.710544 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4c74a251bd4fc9a59eb253051c3c338398133e9578041003b4445c039ff0a0c4', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-01 05:12:58.710571 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4ab6032e7f7c03bb8be0afa163ae397b95b42c2b17844007f00c36e68583f6be', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-01 05:12:58.710596 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c979c431f32026f95036161cf5cee51db98eabaca5d803e7bdbb50ad7a15cd50', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-01 05:12:58.710615 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ea87eaa002f1709693c5d9a8fa5f83566311596975145d2ecaa4d381b0ab6f3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-01 05:12:58.710635 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb1be3631ac29aefe94e07ea8abdfbaa2ebcf991463f3c524bbfa655989627b3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-01 05:12:58.710653 | orchestrator | ok: [testbed-node-3] => (item={'id': '820b240685b0c259fe08148a867545d3483af492e79828e9de2147d1af9a14c7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 05:12:58.710689 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c0b9b0fa24cb0797bd29369b6e446f674bb823c618926962dace9dbb1d70fcf8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 05:12:58.710709 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f6e3dd7071d84730bbe547d4eb6c486b62c1fbc564bd1ae5eae1ff3f53487f19', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-01 05:12:58.710756 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd29b071a241d8c25f6d9ba719393deede4ffd6c9774ae6ee09f6a092d33faca3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-01 05:12:58.710768 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a05c288374c73fe052bed21824afc6d67b642bcd3d6ea8b1567656bf1f9e7072', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-06-01 05:12:58.710780 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40858415bc5ec83b60773d067abf7abf189bb5a80066428a1e4cbf281195ae2b', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-01 05:12:58.710790 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd35aa5d78b888bb39720033571b05b24cc79b8673c1e8e0ad22a128ba3e8570d', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-01 05:12:58.710802 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aab08b497e2c43749dd0d749dd0428d3a9620ca3ad0d9934aebe030255ff9b59', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-01 05:12:58.710813 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1bbeddacbef0d172e4f723dba91ea763271cb142c1630447a7627577cd75cb4c', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-01 05:12:58.710835 | orchestrator | skipping: [testbed-node-4] => (item={'id': '41c19e993b5657f58cbe675b1888aa3ae98b77d8902e051d3a3dfe5967ce164c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-01 05:12:58.866948 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b6a7224012be1ed75c140579d9f9b278a689bc53d17b12d1eb9320d11cb3ff89', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-01 05:12:58.867073 | orchestrator | skipping: [testbed-node-4] => (item={'id': '050def7e5e7fb6b57954b0fa4b6363b56d81ac2a5841c0cbd65e370d13398d29', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-01 05:12:58.867106 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd9284247b5e9361f1476c7ea736f20166b3631aec0e6935ecfcdd6b3948cfec9', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-01 05:12:58.867122 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7679431630f733fef596bf1c282fb25367538974ff36b5be89c4d5571d213553', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-01 05:12:58.867148 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f775ed174d0fc5cd700f20dec8a1b1fb6ec77f4ebafadb4b41406a4d391a2b24', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-01 05:12:58.868111 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3eedc08875709fe78657c1fbee29633f073447a42abc56463fd771935817c6c2', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-01 05:12:58.868197 | orchestrator | skipping: [testbed-node-4] => (item={'id': '00d619f0bc20bf80b181bc674e5bae6037dbcf2802a0155a768ac14cf0d22d0d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-01 05:12:58.868218 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5380e4996f572fe3b20b509f907faea49eed3d0cd121f7f4502812f06c9385a0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-01 05:12:58.868235 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aed46b138bf97e95f3d24f181ec958eb50fb1be51c77b25de6fa5bb320673b03', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-01 05:12:58.868252 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'adf659f0b0591c25812f894cc0885773b04ad44a8aa627847971dcaa3cbbdf7e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-01 05:12:58.868271 | orchestrator | ok: [testbed-node-4] => (item={'id': '437078ab0bc8885659cd8a83dc7332f880eecbfa2452ae674a317b52f85c6a3d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 05:12:58.868288 | orchestrator | ok: [testbed-node-4] => (item={'id': '71885febad4a6aa5eb0b64c2f79d031481c27780f5396c1c32186b221644dab5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 05:12:58.868305 | orchestrator | skipping: [testbed-node-4] => (item={'id': '281f487ede1fdcaa6a84e75d7a6fc1e25b42fddd0f4520ce4ec64f0a6697af8f', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-01 05:12:58.868323 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a9372e490c68728317c6b0d8ab2c915c9327f77c305a59b4341f9379f4cae32', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-01 05:12:58.868341 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7871a263ea5159ca2df27f7c2064b2e6b522c8ffca0a162a8265ecb61acfd6cd', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-06-01 05:12:58.868384 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9216a2881fbdff6fc2afd462797ab61f1b7853585258b8fe700d03fc9a18bac5', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-01 05:12:58.868403 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9b54f2929d34d122ee584d187287ea924c643de56576e748679dbb1f35055a8d', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-01 05:12:58.868420 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e9c32156117c379b5b0fbdad409912b7d8247926c88ccacbf966f1b42e6542a2', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-01 05:12:58.868455 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b9cff276107eb884a2fa00a3423cf65364143da9e2726ad092ac261ba583c8a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-01 05:12:58.868473 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b74d0da428008dfe75849c9d09cfa93fc761eb78544f22418a5a706135e44e4d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-01 05:12:58.868503 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0b6a50df9da23d274002cc57516fb2992fc03a6f266401ede8d7be9f73fcf1a9', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-01 05:12:58.868520 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ab5d21f009dd457b2c206a3df2670e117115777d6486fa1326b4fbf222f92620', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-01 05:12:58.868537 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac1bac5df8058b9da8d78b3eaec731c41ff83f644fe37f117362f4d26ab81839', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-01 05:12:58.868554 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9a2d09a830d80e768d4facf45727c2a4aef18a7e560d736c8299b6f01d4a4e9', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-01 05:12:58.868571 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b87bdce7c4b97f77a5b29b3eda0613b612cea0bfd67b2c53758bcbb0c22f6465', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-01 05:12:58.868588 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1e8f4d9aeb32e18870acc9d3965675ec679264589b260d0e1c96724c744de0df', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-01 05:12:58.868605 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6405dee86b93b5a3bb4116e4a4a404e3a0d878d07433c830113f5caa130c5247', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-01 05:12:58.868621 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97c300295f7b2d528b4d64714e31908b54031f1783ef8afeb0ede311382933c8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-01 05:12:58.868639 | orchestrator | skipping: [testbed-node-5] => (item={'id': '71730ba8c3c38967c996762c99fd0bb1890adbb27741cd24d0ed25c41efa5cfd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-01 05:12:58.868656 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25d221d9cfe3f0e9752fdde33e50414c19ea2008d8899773ce3413c15946f335', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-01 05:12:58.868683 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c8e2213047e79309f49944e8e04465a0e5784a12306c1f93b1c8ed4fdf05a2e7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 05:13:07.506839 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd2a98faea7e8b601711ad8384afa71a040a554f70f7a151a34fbb5359187798b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 05:13:07.506973 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fdb5b36f1146200f5026b93076ccb34580f19f02933f6f9304e5124278be1ad4', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-01 05:13:07.507016 | orchestrator | skipping: [testbed-node-5] => (item={'id': '52ad66739c7a871dcbf556318b124e831b577936d361bfe87a3c9ded4d1ed76a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-01 05:13:07.507058 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd512af0e55223da7872cbdfd1bfeead866febc211a8461d06263ac0958791678', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-06-01 05:13:07.507075 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e460322a7d033393cee2edff099ed773bca0c5f856b70605ea9de3881c29675e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-01 05:13:07.507090 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8cb23f3f6d31e40ce842075bd607fd98b5bf7ae4426176f203ee3f2b595a999f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-01 05:13:07.507104 | orchestrator | skipping: [testbed-node-5] => (item={'id': '79522424445c9f52b2bd4b9a280e95404ec26a9a785ffd7aabc880c50d45892d', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-01 05:13:07.507118 | orchestrator |
2025-06-01 05:13:07.507134 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-06-01 05:13:07.507150 | orchestrator | Sunday 01 June 2025 05:12:58 +0000 (0:00:00.485) 0:00:05.084 ***********
2025-06-01 05:13:07.507164 | orchestrator | ok: [testbed-node-3]
2025-06-01 05:13:07.507179 | orchestrator | ok: [testbed-node-4]
2025-06-01 05:13:07.507192 | orchestrator | ok: [testbed-node-5]
2025-06-01 05:13:07.507206 | orchestrator |
2025-06-01 05:13:07.507220 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-06-01 05:13:07.507234 | orchestrator | Sunday 01 June 2025 05:12:59 +0000 (0:00:00.336) 0:00:05.420 ***********
2025-06-01 05:13:07.507248 | orchestrator | skipping: [testbed-node-3]
2025-06-01 05:13:07.507263 | orchestrator | skipping: [testbed-node-4]
2025-06-01 05:13:07.507276 | orchestrator | skipping: [testbed-node-5]
2025-06-01 05:13:07.507290 | orchestrator |
2025-06-01 05:13:07.507304 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-06-01 05:13:07.507321 | orchestrator
| Sunday 01 June 2025 05:12:59 +0000 (0:00:00.512) 0:00:05.932 *********** 2025-06-01 05:13:07.507334 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.507349 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:07.507365 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:07.507379 | orchestrator | 2025-06-01 05:13:07.507394 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-01 05:13:07.507408 | orchestrator | Sunday 01 June 2025 05:13:00 +0000 (0:00:00.317) 0:00:06.249 *********** 2025-06-01 05:13:07.507423 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.507436 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:07.507449 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:07.507463 | orchestrator | 2025-06-01 05:13:07.507476 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-01 05:13:07.507490 | orchestrator | Sunday 01 June 2025 05:13:00 +0000 (0:00:00.294) 0:00:06.544 *********** 2025-06-01 05:13:07.507503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-01 05:13:07.507518 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-01 05:13:07.507532 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.507544 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-01 05:13:07.507558 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-01 05:13:07.507571 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:07.507596 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-01 05:13:07.507611 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 
'state': 'running'})  2025-06-01 05:13:07.507626 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:07.507640 | orchestrator | 2025-06-01 05:13:07.507654 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-01 05:13:07.507668 | orchestrator | Sunday 01 June 2025 05:13:00 +0000 (0:00:00.318) 0:00:06.863 *********** 2025-06-01 05:13:07.507682 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.507696 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:07.507710 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:07.507763 | orchestrator | 2025-06-01 05:13:07.507800 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-01 05:13:07.507816 | orchestrator | Sunday 01 June 2025 05:13:01 +0000 (0:00:00.504) 0:00:07.368 *********** 2025-06-01 05:13:07.507830 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.507844 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:07.507858 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:07.507872 | orchestrator | 2025-06-01 05:13:07.507887 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-01 05:13:07.507901 | orchestrator | Sunday 01 June 2025 05:13:01 +0000 (0:00:00.286) 0:00:07.654 *********** 2025-06-01 05:13:07.507916 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.507930 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:07.507944 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:07.507959 | orchestrator | 2025-06-01 05:13:07.507973 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-01 05:13:07.507987 | orchestrator | Sunday 01 June 2025 05:13:01 +0000 (0:00:00.293) 0:00:07.948 *********** 2025-06-01 05:13:07.508001 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.508024 | orchestrator | ok: [testbed-node-4] 2025-06-01 
05:13:07.508039 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:07.508054 | orchestrator | 2025-06-01 05:13:07.508068 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-01 05:13:07.508083 | orchestrator | Sunday 01 June 2025 05:13:02 +0000 (0:00:00.326) 0:00:08.275 *********** 2025-06-01 05:13:07.508097 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.508111 | orchestrator | 2025-06-01 05:13:07.508126 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-01 05:13:07.508141 | orchestrator | Sunday 01 June 2025 05:13:02 +0000 (0:00:00.699) 0:00:08.974 *********** 2025-06-01 05:13:07.508155 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.508169 | orchestrator | 2025-06-01 05:13:07.508184 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-01 05:13:07.508199 | orchestrator | Sunday 01 June 2025 05:13:03 +0000 (0:00:00.236) 0:00:09.211 *********** 2025-06-01 05:13:07.508213 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.508228 | orchestrator | 2025-06-01 05:13:07.508242 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 05:13:07.508257 | orchestrator | Sunday 01 June 2025 05:13:03 +0000 (0:00:00.259) 0:00:09.470 *********** 2025-06-01 05:13:07.508272 | orchestrator | 2025-06-01 05:13:07.508287 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 05:13:07.508301 | orchestrator | Sunday 01 June 2025 05:13:03 +0000 (0:00:00.075) 0:00:09.545 *********** 2025-06-01 05:13:07.508315 | orchestrator | 2025-06-01 05:13:07.508329 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 05:13:07.508343 | orchestrator | Sunday 01 June 2025 05:13:03 +0000 (0:00:00.070) 0:00:09.615 *********** 2025-06-01 
05:13:07.508357 | orchestrator | 2025-06-01 05:13:07.508370 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-01 05:13:07.508384 | orchestrator | Sunday 01 June 2025 05:13:03 +0000 (0:00:00.071) 0:00:09.687 *********** 2025-06-01 05:13:07.508397 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.508420 | orchestrator | 2025-06-01 05:13:07.508434 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-01 05:13:07.508449 | orchestrator | Sunday 01 June 2025 05:13:03 +0000 (0:00:00.253) 0:00:09.941 *********** 2025-06-01 05:13:07.508463 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.508478 | orchestrator | 2025-06-01 05:13:07.508493 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-01 05:13:07.508507 | orchestrator | Sunday 01 June 2025 05:13:04 +0000 (0:00:00.224) 0:00:10.165 *********** 2025-06-01 05:13:07.508521 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.508536 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:07.508551 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:07.508566 | orchestrator | 2025-06-01 05:13:07.508580 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-01 05:13:07.508595 | orchestrator | Sunday 01 June 2025 05:13:04 +0000 (0:00:00.296) 0:00:10.462 *********** 2025-06-01 05:13:07.508609 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.508623 | orchestrator | 2025-06-01 05:13:07.508638 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-01 05:13:07.508653 | orchestrator | Sunday 01 June 2025 05:13:04 +0000 (0:00:00.674) 0:00:11.137 *********** 2025-06-01 05:13:07.508667 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 05:13:07.508682 | orchestrator | 2025-06-01 05:13:07.508697 | 
orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-01 05:13:07.508711 | orchestrator | Sunday 01 June 2025 05:13:06 +0000 (0:00:01.500) 0:00:12.637 *********** 2025-06-01 05:13:07.508757 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.508773 | orchestrator | 2025-06-01 05:13:07.508789 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-01 05:13:07.508805 | orchestrator | Sunday 01 June 2025 05:13:06 +0000 (0:00:00.153) 0:00:12.791 *********** 2025-06-01 05:13:07.508821 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.508837 | orchestrator | 2025-06-01 05:13:07.508854 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-01 05:13:07.508870 | orchestrator | Sunday 01 June 2025 05:13:06 +0000 (0:00:00.304) 0:00:13.095 *********** 2025-06-01 05:13:07.508886 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:07.508902 | orchestrator | 2025-06-01 05:13:07.508918 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-01 05:13:07.508934 | orchestrator | Sunday 01 June 2025 05:13:07 +0000 (0:00:00.126) 0:00:13.222 *********** 2025-06-01 05:13:07.508950 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.508966 | orchestrator | 2025-06-01 05:13:07.508982 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-01 05:13:07.508998 | orchestrator | Sunday 01 June 2025 05:13:07 +0000 (0:00:00.133) 0:00:13.355 *********** 2025-06-01 05:13:07.509015 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:07.509031 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:07.509048 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:07.509063 | orchestrator | 2025-06-01 05:13:07.509079 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-01 
05:13:07.509107 | orchestrator | Sunday 01 June 2025 05:13:07 +0000 (0:00:00.293) 0:00:13.649 *********** 2025-06-01 05:13:19.943135 | orchestrator | changed: [testbed-node-3] 2025-06-01 05:13:19.943267 | orchestrator | changed: [testbed-node-4] 2025-06-01 05:13:19.943284 | orchestrator | changed: [testbed-node-5] 2025-06-01 05:13:19.943304 | orchestrator | 2025-06-01 05:13:19.943318 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-01 05:13:19.943331 | orchestrator | Sunday 01 June 2025 05:13:10 +0000 (0:00:02.569) 0:00:16.218 *********** 2025-06-01 05:13:19.943343 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.943355 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.943366 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.943377 | orchestrator | 2025-06-01 05:13:19.943388 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-01 05:13:19.943422 | orchestrator | Sunday 01 June 2025 05:13:10 +0000 (0:00:00.305) 0:00:16.523 *********** 2025-06-01 05:13:19.943434 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.943444 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.943467 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.943479 | orchestrator | 2025-06-01 05:13:19.943490 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-01 05:13:19.943501 | orchestrator | Sunday 01 June 2025 05:13:10 +0000 (0:00:00.479) 0:00:17.003 *********** 2025-06-01 05:13:19.943512 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:19.943523 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:19.943533 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:19.943544 | orchestrator | 2025-06-01 05:13:19.943555 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-01 05:13:19.943566 | orchestrator | Sunday 01 
June 2025 05:13:11 +0000 (0:00:00.340) 0:00:17.344 *********** 2025-06-01 05:13:19.943576 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.943587 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.943597 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.943608 | orchestrator | 2025-06-01 05:13:19.943619 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-01 05:13:19.943630 | orchestrator | Sunday 01 June 2025 05:13:11 +0000 (0:00:00.535) 0:00:17.879 *********** 2025-06-01 05:13:19.943641 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:19.943652 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:19.943663 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:19.943676 | orchestrator | 2025-06-01 05:13:19.943690 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-01 05:13:19.943702 | orchestrator | Sunday 01 June 2025 05:13:12 +0000 (0:00:00.289) 0:00:18.169 *********** 2025-06-01 05:13:19.943738 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:19.943750 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:19.943762 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:19.943775 | orchestrator | 2025-06-01 05:13:19.943788 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-01 05:13:19.943801 | orchestrator | Sunday 01 June 2025 05:13:12 +0000 (0:00:00.285) 0:00:18.455 *********** 2025-06-01 05:13:19.943813 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.943826 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.943839 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.943851 | orchestrator | 2025-06-01 05:13:19.943863 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-01 05:13:19.943877 | orchestrator | Sunday 01 June 2025 05:13:12 +0000 (0:00:00.464) 
0:00:18.919 *********** 2025-06-01 05:13:19.943890 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.943902 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.943914 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.943926 | orchestrator | 2025-06-01 05:13:19.943939 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-01 05:13:19.943952 | orchestrator | Sunday 01 June 2025 05:13:13 +0000 (0:00:00.775) 0:00:19.695 *********** 2025-06-01 05:13:19.943965 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.943978 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.943990 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.944003 | orchestrator | 2025-06-01 05:13:19.944061 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-01 05:13:19.944075 | orchestrator | Sunday 01 June 2025 05:13:13 +0000 (0:00:00.306) 0:00:20.001 *********** 2025-06-01 05:13:19.944086 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:19.944097 | orchestrator | skipping: [testbed-node-4] 2025-06-01 05:13:19.944107 | orchestrator | skipping: [testbed-node-5] 2025-06-01 05:13:19.944118 | orchestrator | 2025-06-01 05:13:19.944129 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-01 05:13:19.944141 | orchestrator | Sunday 01 June 2025 05:13:14 +0000 (0:00:00.299) 0:00:20.301 *********** 2025-06-01 05:13:19.944161 | orchestrator | ok: [testbed-node-3] 2025-06-01 05:13:19.944171 | orchestrator | ok: [testbed-node-4] 2025-06-01 05:13:19.944182 | orchestrator | ok: [testbed-node-5] 2025-06-01 05:13:19.944192 | orchestrator | 2025-06-01 05:13:19.944203 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-01 05:13:19.944214 | orchestrator | Sunday 01 June 2025 05:13:14 +0000 (0:00:00.314) 0:00:20.615 *********** 2025-06-01 05:13:19.944224 | 
orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 05:13:19.944235 | orchestrator | 2025-06-01 05:13:19.944246 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-01 05:13:19.944256 | orchestrator | Sunday 01 June 2025 05:13:15 +0000 (0:00:00.724) 0:00:21.340 *********** 2025-06-01 05:13:19.944267 | orchestrator | skipping: [testbed-node-3] 2025-06-01 05:13:19.944277 | orchestrator | 2025-06-01 05:13:19.944288 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-01 05:13:19.944299 | orchestrator | Sunday 01 June 2025 05:13:15 +0000 (0:00:00.242) 0:00:21.583 *********** 2025-06-01 05:13:19.944309 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 05:13:19.944320 | orchestrator | 2025-06-01 05:13:19.944331 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-01 05:13:19.944341 | orchestrator | Sunday 01 June 2025 05:13:17 +0000 (0:00:01.582) 0:00:23.165 *********** 2025-06-01 05:13:19.944352 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 05:13:19.944362 | orchestrator | 2025-06-01 05:13:19.944373 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-01 05:13:19.944384 | orchestrator | Sunday 01 June 2025 05:13:17 +0000 (0:00:00.245) 0:00:23.411 *********** 2025-06-01 05:13:19.944412 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 05:13:19.944423 | orchestrator | 2025-06-01 05:13:19.944435 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 05:13:19.944445 | orchestrator | Sunday 01 June 2025 05:13:17 +0000 (0:00:00.258) 0:00:23.669 *********** 2025-06-01 05:13:19.944456 | orchestrator | 2025-06-01 05:13:19.944467 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-06-01 05:13:19.944477 | orchestrator | Sunday 01 June 2025 05:13:17 +0000 (0:00:00.067) 0:00:23.737 *********** 2025-06-01 05:13:19.944488 | orchestrator | 2025-06-01 05:13:19.944498 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 05:13:19.944509 | orchestrator | Sunday 01 June 2025 05:13:17 +0000 (0:00:00.065) 0:00:23.802 *********** 2025-06-01 05:13:19.944520 | orchestrator | 2025-06-01 05:13:19.944530 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-01 05:13:19.944546 | orchestrator | Sunday 01 June 2025 05:13:17 +0000 (0:00:00.068) 0:00:23.871 *********** 2025-06-01 05:13:19.944557 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 05:13:19.944567 | orchestrator | 2025-06-01 05:13:19.944578 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-01 05:13:19.944589 | orchestrator | Sunday 01 June 2025 05:13:18 +0000 (0:00:01.270) 0:00:25.142 *********** 2025-06-01 05:13:19.944599 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-01 05:13:19.944610 | orchestrator |  "msg": [ 2025-06-01 05:13:19.944621 | orchestrator |  "Validator run completed.", 2025-06-01 05:13:19.944632 | orchestrator |  "You can find the report file here:", 2025-06-01 05:13:19.944643 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-01T05:12:54+00:00-report.json", 2025-06-01 05:13:19.944654 | orchestrator |  "on the following host:", 2025-06-01 05:13:19.944664 | orchestrator |  "testbed-manager" 2025-06-01 05:13:19.944675 | orchestrator |  ] 2025-06-01 05:13:19.944687 | orchestrator | } 2025-06-01 05:13:19.944706 | orchestrator | 2025-06-01 05:13:19.944781 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 
05:13:19.944801 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-01 05:13:19.944831 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-01 05:13:19.944851 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-01 05:13:19.944870 | orchestrator | 2025-06-01 05:13:19.944890 | orchestrator | 2025-06-01 05:13:19.944908 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:13:19.944923 | orchestrator | Sunday 01 June 2025 05:13:19 +0000 (0:00:00.612) 0:00:25.754 *********** 2025-06-01 05:13:19.944934 | orchestrator | =============================================================================== 2025-06-01 05:13:19.944944 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.57s 2025-06-01 05:13:19.944955 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2025-06-01 05:13:19.944965 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.50s 2025-06-01 05:13:19.944976 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2025-06-01 05:13:19.944986 | orchestrator | Create report output directory ------------------------------------------ 0.96s 2025-06-01 05:13:19.944997 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.78s 2025-06-01 05:13:19.945007 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.72s 2025-06-01 05:13:19.945018 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s 2025-06-01 05:13:19.945028 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.67s 2025-06-01 05:13:19.945039 | 
orchestrator | Get timestamp for report file ------------------------------------------- 0.62s 2025-06-01 05:13:19.945049 | orchestrator | Print report file information ------------------------------------------- 0.61s 2025-06-01 05:13:19.945060 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.58s 2025-06-01 05:13:19.945071 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.54s 2025-06-01 05:13:19.945081 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.51s 2025-06-01 05:13:19.945092 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s 2025-06-01 05:13:19.945102 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2025-06-01 05:13:19.945113 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s 2025-06-01 05:13:19.945123 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s 2025-06-01 05:13:19.945134 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2025-06-01 05:13:19.945145 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.45s 2025-06-01 05:13:20.225119 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-01 05:13:20.235841 | orchestrator | + set -e 2025-06-01 05:13:20.235898 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 05:13:20.235906 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 05:13:20.235913 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 05:13:20.235919 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 05:13:20.235924 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 05:13:20.235931 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 05:13:20.235938 | orchestrator | ++ CONFIGURATION_VERSION=main 
2025-06-01 05:13:20.235944 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 05:13:20.235949 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 05:13:20.235955 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 05:13:20.235961 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 05:13:20.235967 | orchestrator | ++ export ARA=false 2025-06-01 05:13:20.235973 | orchestrator | ++ ARA=false 2025-06-01 05:13:20.235978 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 05:13:20.235984 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 05:13:20.235990 | orchestrator | ++ export TEMPEST=true 2025-06-01 05:13:20.236014 | orchestrator | ++ TEMPEST=true 2025-06-01 05:13:20.236020 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 05:13:20.236026 | orchestrator | ++ IS_ZUUL=true 2025-06-01 05:13:20.236031 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 05:13:20.236037 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.201 2025-06-01 05:13:20.236043 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 05:13:20.236049 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 05:13:20.236054 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 05:13:20.236060 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 05:13:20.236065 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 05:13:20.236071 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 05:13:20.236077 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 05:13:20.236083 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 05:13:20.236089 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-01 05:13:20.236095 | orchestrator | + source /etc/os-release 2025-06-01 05:13:20.236100 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-01 05:13:20.236106 | orchestrator | ++ NAME=Ubuntu 2025-06-01 05:13:20.236123 | orchestrator | ++ VERSION_ID=24.04 2025-06-01 05:13:20.236133 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 
2025-06-01 05:13:20.236142 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-01 05:13:20.236151 | orchestrator | ++ ID=ubuntu 2025-06-01 05:13:20.236160 | orchestrator | ++ ID_LIKE=debian 2025-06-01 05:13:20.236170 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-01 05:13:20.236178 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-01 05:13:20.236187 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-01 05:13:20.236197 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-01 05:13:20.236207 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-01 05:13:20.236216 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-01 05:13:20.236224 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-01 05:13:20.236235 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-01 05:13:20.236245 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-01 05:13:20.261329 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-01 05:13:43.273197 | orchestrator | 2025-06-01 05:13:43.273337 | orchestrator | # Status of Elasticsearch 2025-06-01 05:13:43.273363 | orchestrator | 2025-06-01 05:13:43.273383 | orchestrator | + pushd /opt/configuration/contrib 2025-06-01 05:13:43.273404 | orchestrator | + echo 2025-06-01 05:13:43.273422 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-01 05:13:43.273441 | orchestrator | + echo 2025-06-01 05:13:43.273460 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-01 05:13:43.466283 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-01 05:13:43.468221 | orchestrator | 2025-06-01 05:13:43.468278 | orchestrator | # Status of MariaDB 2025-06-01 05:13:43.468293 | orchestrator | 2025-06-01 05:13:43.468305 | orchestrator | + echo 2025-06-01 05:13:43.468317 | orchestrator | + echo '# Status of MariaDB' 2025-06-01 05:13:43.468328 | orchestrator | + echo 2025-06-01 05:13:43.468339 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-01 05:13:43.468352 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-01 05:13:43.536953 | orchestrator | Reading package lists... 2025-06-01 05:13:43.903386 | orchestrator | Building dependency tree... 2025-06-01 05:13:43.903924 | orchestrator | Reading state information... 2025-06-01 05:13:44.315887 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-01 05:13:44.315995 | orchestrator | bc set to manually installed. 2025-06-01 05:13:44.316011 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-06-01 05:13:44.953081 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-01 05:13:44.954007 | orchestrator | 2025-06-01 05:13:44.954101 | orchestrator | # Status of Prometheus 2025-06-01 05:13:44.954116 | orchestrator | 2025-06-01 05:13:44.954128 | orchestrator | + echo 2025-06-01 05:13:44.954139 | orchestrator | + echo '# Status of Prometheus' 2025-06-01 05:13:44.954151 | orchestrator | + echo 2025-06-01 05:13:44.954189 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-01 05:13:45.006077 | orchestrator | Unauthorized 2025-06-01 05:13:45.009280 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-01 05:13:45.070321 | orchestrator | Unauthorized 2025-06-01 05:13:45.072983 | orchestrator | 2025-06-01 05:13:45.073030 | orchestrator | # Status of RabbitMQ 2025-06-01 05:13:45.073043 | orchestrator | 2025-06-01 05:13:45.073055 | orchestrator | + echo 2025-06-01 05:13:45.073066 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-01 05:13:45.073077 | orchestrator | + echo 2025-06-01 05:13:45.073089 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-01 05:13:45.508050 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-01 05:13:45.519819 | orchestrator | 2025-06-01 05:13:45.519909 | orchestrator | # Status of Redis 2025-06-01 05:13:45.519931 | orchestrator | 2025-06-01 05:13:45.519950 | orchestrator | + echo 2025-06-01 05:13:45.519966 | orchestrator | + echo '# Status of Redis' 2025-06-01 05:13:45.519984 | orchestrator | + echo 2025-06-01 05:13:45.520003 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-01 05:13:45.525089 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001690s;;;0.000000;10.000000 2025-06-01 05:13:45.525168 | orchestrator | 2025-06-01 05:13:45.525192 | orchestrator | # Create backup of MariaDB database 2025-06-01 05:13:45.525214 | orchestrator | 2025-06-01 05:13:45.525232 | orchestrator | + popd 2025-06-01 05:13:45.525244 | orchestrator | + echo 2025-06-01 05:13:45.525255 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-01 05:13:45.525266 | orchestrator | + echo 2025-06-01 05:13:45.525278 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-01 05:13:47.333388 | orchestrator | 2025-06-01 05:13:47 | INFO  | Task 92d9dcbc-9ac5-44ac-abf6-cefa438b6c7e (mariadb_backup) was prepared for execution. 2025-06-01 05:13:47.333501 | orchestrator | 2025-06-01 05:13:47 | INFO  | It takes a moment until task 92d9dcbc-9ac5-44ac-abf6-cefa438b6c7e (mariadb_backup) has been started and output is visible here. 2025-06-01 05:13:51.241610 | orchestrator | 2025-06-01 05:13:51.241768 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:13:51.241788 | orchestrator | 2025-06-01 05:13:51.241801 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:13:51.242316 | orchestrator | Sunday 01 June 2025 05:13:51 +0000 (0:00:00.180) 0:00:00.180 *********** 2025-06-01 05:13:51.437414 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:13:51.572917 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:13:51.573022 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:13:51.574013 | orchestrator | 2025-06-01 05:13:51.575450 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:13:51.576080 | orchestrator | Sunday 01 June 2025 05:13:51 +0000 (0:00:00.331) 0:00:00.512 *********** 2025-06-01 05:13:52.171942 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-06-01 05:13:52.172531 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-01 05:13:52.173381 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-01 05:13:52.173563 | orchestrator | 2025-06-01 05:13:52.173957 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 05:13:52.174347 | orchestrator | 2025-06-01 05:13:52.178148 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 05:13:52.178213 | orchestrator | Sunday 01 June 2025 05:13:52 +0000 (0:00:00.602) 0:00:01.115 *********** 2025-06-01 05:13:52.557281 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 05:13:52.557523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 05:13:52.558580 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 05:13:52.559308 | orchestrator | 2025-06-01 05:13:52.560418 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 05:13:52.560885 | orchestrator | Sunday 01 June 2025 05:13:52 +0000 (0:00:00.383) 0:00:01.499 *********** 2025-06-01 05:13:53.099569 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:13:53.099877 | orchestrator | 2025-06-01 05:13:53.100999 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-01 05:13:53.105286 | orchestrator | Sunday 01 June 2025 05:13:53 +0000 (0:00:00.541) 0:00:02.040 *********** 2025-06-01 05:13:56.217308 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:13:56.219395 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:13:56.219431 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:13:56.221381 | orchestrator | 2025-06-01 05:13:56.221406 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-06-01 05:13:56.222345 | orchestrator | Sunday 01 June 2025 05:13:56 +0000 (0:00:03.112) 0:00:05.153 *********** 2025-06-01 05:15:17.675855 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-01 05:15:17.675984 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-01 05:15:17.676010 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 05:15:17.676023 | orchestrator | mariadb_bootstrap_restart 2025-06-01 05:15:17.742818 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:15:17.743022 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:17.744271 | orchestrator | changed: [testbed-node-0] 2025-06-01 05:15:17.745923 | orchestrator | 2025-06-01 05:15:17.746926 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-01 05:15:17.748329 | orchestrator | skipping: no hosts matched 2025-06-01 05:15:17.749146 | orchestrator | 2025-06-01 05:15:17.749766 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 05:15:17.750435 | orchestrator | skipping: no hosts matched 2025-06-01 05:15:17.750828 | orchestrator | 2025-06-01 05:15:17.751638 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-01 05:15:17.752325 | orchestrator | skipping: no hosts matched 2025-06-01 05:15:17.752704 | orchestrator | 2025-06-01 05:15:17.752931 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-01 05:15:17.753826 | orchestrator | 2025-06-01 05:15:17.754515 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-01 05:15:17.755869 | orchestrator | Sunday 01 June 2025 05:15:17 +0000 (0:01:21.532) 0:01:26.685 *********** 2025-06-01 05:15:17.929498 | orchestrator | 
skipping: [testbed-node-0] 2025-06-01 05:15:18.048758 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:15:18.050251 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:18.051178 | orchestrator | 2025-06-01 05:15:18.051938 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-01 05:15:18.052864 | orchestrator | Sunday 01 June 2025 05:15:18 +0000 (0:00:00.305) 0:01:26.991 *********** 2025-06-01 05:15:18.413824 | orchestrator | skipping: [testbed-node-0] 2025-06-01 05:15:18.461622 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:15:18.461897 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:18.462792 | orchestrator | 2025-06-01 05:15:18.463697 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:15:18.464187 | orchestrator | 2025-06-01 05:15:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 05:15:18.464615 | orchestrator | 2025-06-01 05:15:18 | INFO  | Please wait and do not abort execution. 
2025-06-01 05:15:18.465443 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 05:15:18.467349 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 05:15:18.468193 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 05:15:18.469846 | orchestrator | 2025-06-01 05:15:18.470306 | orchestrator | 2025-06-01 05:15:18.470843 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:15:18.471729 | orchestrator | Sunday 01 June 2025 05:15:18 +0000 (0:00:00.413) 0:01:27.404 *********** 2025-06-01 05:15:18.471751 | orchestrator | =============================================================================== 2025-06-01 05:15:18.472203 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 81.53s 2025-06-01 05:15:18.472519 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.11s 2025-06-01 05:15:18.473755 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-06-01 05:15:18.474109 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2025-06-01 05:15:18.474151 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2025-06-01 05:15:18.474257 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s 2025-06-01 05:15:18.474664 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-01 05:15:18.475000 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-06-01 05:15:19.114274 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-06-01 05:15:20.836209 | orchestrator | 
2025-06-01 05:15:20 | INFO  | Task e3ebdab0-551d-46cf-a643-a93454f9873e (mariadb_backup) was prepared for execution. 2025-06-01 05:15:20.836363 | orchestrator | 2025-06-01 05:15:20 | INFO  | It takes a moment until task e3ebdab0-551d-46cf-a643-a93454f9873e (mariadb_backup) has been started and output is visible here. 2025-06-01 05:15:24.823538 | orchestrator | 2025-06-01 05:15:24.828041 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:15:24.828926 | orchestrator | 2025-06-01 05:15:24.831917 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:15:24.832517 | orchestrator | Sunday 01 June 2025 05:15:24 +0000 (0:00:00.182) 0:00:00.183 *********** 2025-06-01 05:15:25.018457 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:15:25.136521 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:15:25.136703 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:15:25.137813 | orchestrator | 2025-06-01 05:15:25.138493 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:15:25.139270 | orchestrator | Sunday 01 June 2025 05:15:25 +0000 (0:00:00.317) 0:00:00.500 *********** 2025-06-01 05:15:25.773060 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-01 05:15:25.773166 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-01 05:15:25.773584 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-01 05:15:25.774275 | orchestrator | 2025-06-01 05:15:25.775111 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 05:15:25.775627 | orchestrator | 2025-06-01 05:15:25.776347 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 05:15:25.776555 | orchestrator | Sunday 01 June 2025 05:15:25 +0000 (0:00:00.635) 0:00:01.136 *********** 
2025-06-01 05:15:26.192122 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 05:15:26.193258 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 05:15:26.193359 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 05:15:26.194472 | orchestrator | 2025-06-01 05:15:26.197083 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 05:15:26.197125 | orchestrator | Sunday 01 June 2025 05:15:26 +0000 (0:00:00.418) 0:00:01.555 *********** 2025-06-01 05:15:26.705897 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:15:26.706184 | orchestrator | 2025-06-01 05:15:26.707384 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-01 05:15:26.711416 | orchestrator | Sunday 01 June 2025 05:15:26 +0000 (0:00:00.513) 0:00:02.068 *********** 2025-06-01 05:15:29.896863 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:15:29.897410 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:15:29.899467 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:15:29.900575 | orchestrator | 2025-06-01 05:15:29.902109 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-06-01 05:15:29.902437 | orchestrator | Sunday 01 June 2025 05:15:29 +0000 (0:00:03.189) 0:00:05.257 *********** 2025-06-01 05:15:34.633282 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:15:34.637403 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:34.637580 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-06-01 05:15:33 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-06-01 05:15:33 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-06-01 05:15:33 incremental backup from 0 is enabled.\n[00] 2025-06-01 05:15:33 uses posix_fadvise().\n[00] 2025-06-01 05:15:33 cd to /var/lib/mysql/\n[00] 2025-06-01 05:15:33 open files limit requested 0, set to 1048576\n[00] 2025-06-01 05:15:33 mariabackup: using the following InnoDB configuration:\n[00] 2025-06-01 05:15:33 innodb_data_home_dir = \n[00] 2025-06-01 05:15:33 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-06-01 05:15:33 innodb_log_group_home_dir = ./\n[00] 2025-06-01 05:15:33 InnoDB: Using liburing\n2025-06-01 5:15:34 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. 
(see man 2 io_uring_setup).\n2025-06-01 5:15:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-06-01 5:15:34 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250601 5:15:34 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x55b85f3703ae]\nmariabackup(handle_fatal_signal+0x229)[0x55b85ee936d9]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x75806cded050]\nmariabackup(server_mysql_fetch_row+0x14)[0x55b85eadf474]\nmariabackup(+0x76ca87)[0x55b85eab1a87]\nmariabackup(+0x75f37a)[0x55b85eaa437a]\nmariabackup(main+0x163)[0x55b85ea49053]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x75806cdd824a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x75806cdd8305]\nmariabackup(_start+0x21)[0x55b85ea8e161]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128063 128063 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E\n\nKernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", 
"INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-06-01 05:15:33 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-06-01 05:15:33 Using server version 10.11.13-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-06-01 05:15:33 incremental backup from 0 is enabled.", "[00] 2025-06-01 05:15:33 uses posix_fadvise().", "[00] 2025-06-01 05:15:33 cd to /var/lib/mysql/", "[00] 2025-06-01 05:15:33 open files limit requested 0, set to 1048576", "[00] 2025-06-01 05:15:33 mariabackup: using the following InnoDB configuration:", "[00] 2025-06-01 05:15:33 innodb_data_home_dir = ", "[00] 2025-06-01 05:15:33 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-06-01 05:15:33 innodb_log_group_home_dir = ./", "[00] 2025-06-01 05:15:33 InnoDB: Using liburing", "2025-06-01 5:15:34 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. 
(see man 2 io_uring_setup).", "2025-06-01 5:15:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-06-01 5:15:34 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250601 5:15:34 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. 
Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x55b85f3703ae]", "mariabackup(handle_fatal_signal+0x229)[0x55b85ee936d9]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x75806cded050]", "mariabackup(server_mysql_fetch_row+0x14)[0x55b85eadf474]", "mariabackup(+0x76ca87)[0x55b85eab1a87]", "mariabackup(+0x75f37a)[0x55b85eaa437a]", "mariabackup(main+0x163)[0x55b85ea49053]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x75806cdd824a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x75806cdd8305]", "mariabackup(_start+0x21)[0x55b85ea8e161]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128063 128063 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E", "", "Kernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-06-01 05:15:34.791774 | orchestrator | 
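The stderr above names the root cause directly: `io_uring_queue_init()` fails with EPERM because `kernel.io_uring_disabled` is set to 2 (or to 1 with the mariabackup user outside `kernel.io_uring_group`), and mariabackup then crashes with SIGSEGV right after logging the `innodb_use_native_aio=OFF` fallback. A minimal sketch, assuming shell access on the affected node, of how one might confirm the sysctl the error message points at:

```shell
# Sketch only: read the sysctl named in the mariabackup EPERM error.
# Per the error text: 0 = io_uring allowed, 1 = restricted to members of
# kernel.io_uring_group, 2 = disabled for all users (the value reported
# in this log). The segfault happens after the fallback warning, so the
# crash itself is a mariabackup bug triggered by this environment.
value=$(sysctl -n kernel.io_uring_disabled 2>/dev/null || echo "unavailable")
echo "kernel.io_uring_disabled=${value}"
```

If the value is 2, relaxing it (e.g. `sysctl -w kernel.io_uring_disabled=0`, subject to local security policy) would let mariabackup use io_uring and sidestep the crashing fallback path; treat that as a workaround hypothesis to verify, not a confirmed fix from this log.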
[WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-01 05:15:34.792814 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-01 05:15:34.794123 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 05:15:34.795063 | orchestrator | mariadb_bootstrap_restart 2025-06-01 05:15:34.868492 | orchestrator | 2025-06-01 05:15:34.869108 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-01 05:15:34.870411 | orchestrator | skipping: no hosts matched 2025-06-01 05:15:34.874316 | orchestrator | 2025-06-01 05:15:34.874377 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 05:15:34.874388 | orchestrator | skipping: no hosts matched 2025-06-01 05:15:34.874397 | orchestrator | 2025-06-01 05:15:34.875030 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-01 05:15:34.875469 | orchestrator | skipping: no hosts matched 2025-06-01 05:15:34.876281 | orchestrator | 2025-06-01 05:15:34.877338 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-01 05:15:34.877888 | orchestrator | 2025-06-01 05:15:34.878444 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-01 05:15:34.879080 | orchestrator | Sunday 01 June 2025 05:15:34 +0000 (0:00:04.974) 0:00:10.232 *********** 2025-06-01 05:15:35.090626 | orchestrator | skipping: [testbed-node-1] 2025-06-01 05:15:35.091359 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:35.092556 | orchestrator | 2025-06-01 05:15:35.093496 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-01 05:15:35.094875 | orchestrator | Sunday 01 June 2025 05:15:35 +0000 (0:00:00.220) 0:00:10.452 *********** 2025-06-01 05:15:35.230449 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 05:15:35.231652 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:35.232441 | orchestrator | 2025-06-01 05:15:35.233956 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 05:15:35.234246 | orchestrator | 2025-06-01 05:15:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 05:15:35.234514 | orchestrator | 2025-06-01 05:15:35 | INFO  | Please wait and do not abort execution. 2025-06-01 05:15:35.235333 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-01 05:15:35.236045 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 05:15:35.236517 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 05:15:35.237125 | orchestrator | 2025-06-01 05:15:35.237984 | orchestrator | 2025-06-01 05:15:35.238578 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 05:15:35.239028 | orchestrator | Sunday 01 June 2025 05:15:35 +0000 (0:00:00.141) 0:00:10.594 *********** 2025-06-01 05:15:35.239532 | orchestrator | =============================================================================== 2025-06-01 05:15:35.240001 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.97s 2025-06-01 05:15:35.240435 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.19s 2025-06-01 05:15:35.241002 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2025-06-01 05:15:35.241419 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-06-01 05:15:35.242118 | orchestrator | mariadb : Group MariaDB hosts based on shards 
--------------------------- 0.42s 2025-06-01 05:15:35.242573 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-06-01 05:15:35.243091 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.22s 2025-06-01 05:15:35.243521 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.14s 2025-06-01 05:15:35.700208 | orchestrator | 2025-06-01 05:15:35 | INFO  | Task 2c6a1a8e-689b-4e52-998d-1679143a902e (mariadb_backup) was prepared for execution. 2025-06-01 05:15:35.700332 | orchestrator | 2025-06-01 05:15:35 | INFO  | It takes a moment until task 2c6a1a8e-689b-4e52-998d-1679143a902e (mariadb_backup) has been started and output is visible here. 2025-06-01 05:15:39.687155 | orchestrator | 2025-06-01 05:15:39.689212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 05:15:39.691195 | orchestrator | 2025-06-01 05:15:39.692200 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 05:15:39.693934 | orchestrator | Sunday 01 June 2025 05:15:39 +0000 (0:00:00.178) 0:00:00.178 *********** 2025-06-01 05:15:39.880897 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:15:40.029353 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:15:40.030387 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:15:40.034634 | orchestrator | 2025-06-01 05:15:40.035912 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 05:15:40.036883 | orchestrator | Sunday 01 June 2025 05:15:40 +0000 (0:00:00.343) 0:00:00.522 *********** 2025-06-01 05:15:40.626631 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-01 05:15:40.627973 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-01 05:15:40.629453 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 
2025-06-01 05:15:40.630958 | orchestrator | 2025-06-01 05:15:40.631649 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 05:15:40.632375 | orchestrator | 2025-06-01 05:15:40.633171 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 05:15:40.633634 | orchestrator | Sunday 01 June 2025 05:15:40 +0000 (0:00:00.598) 0:00:01.120 *********** 2025-06-01 05:15:41.034596 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 05:15:41.035248 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 05:15:41.035805 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 05:15:41.036921 | orchestrator | 2025-06-01 05:15:41.037571 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 05:15:41.038350 | orchestrator | Sunday 01 June 2025 05:15:41 +0000 (0:00:00.406) 0:00:01.527 *********** 2025-06-01 05:15:41.573788 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 05:15:41.573977 | orchestrator | 2025-06-01 05:15:41.575207 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-01 05:15:41.578154 | orchestrator | Sunday 01 June 2025 05:15:41 +0000 (0:00:00.540) 0:00:02.067 *********** 2025-06-01 05:15:44.830171 | orchestrator | ok: [testbed-node-0] 2025-06-01 05:15:44.830358 | orchestrator | ok: [testbed-node-1] 2025-06-01 05:15:44.833994 | orchestrator | ok: [testbed-node-2] 2025-06-01 05:15:44.834092 | orchestrator | 2025-06-01 05:15:44.834104 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-06-01 05:15:44.835090 | orchestrator | Sunday 01 June 2025 05:15:44 +0000 (0:00:03.253) 0:00:05.320 *********** 2025-06-01 05:15:49.311950 | orchestrator | skipping: [testbed-node-1] 
2025-06-01 05:15:49.312172 | orchestrator | skipping: [testbed-node-2] 2025-06-01 05:15:49.314635 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-06-01 05:15:48 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-06-01 05:15:48 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-06-01 05:15:48 incremental backup from 0 is enabled.\n[00] 2025-06-01 05:15:48 uses posix_fadvise().\n[00] 2025-06-01 05:15:48 cd to /var/lib/mysql/\n[00] 2025-06-01 05:15:48 open files limit requested 0, set to 1048576\n[00] 2025-06-01 05:15:48 mariabackup: using the following InnoDB configuration:\n[00] 2025-06-01 05:15:48 innodb_data_home_dir = \n[00] 2025-06-01 05:15:48 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-06-01 05:15:48 innodb_log_group_home_dir = ./\n[00] 2025-06-01 05:15:48 InnoDB: Using liburing\n2025-06-01 5:15:48 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and 
the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-06-01 5:15:48 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-06-01 5:15:48 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250601 5:15:48 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x555c790623ae]\nmariabackup(handle_fatal_signal+0x229)[0x555c78b856d9]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x757f3c04c050]\nmariabackup(server_mysql_fetch_row+0x14)[0x555c787d1474]\nmariabackup(+0x76ca87)[0x555c787a3a87]\nmariabackup(+0x75f37a)[0x555c7879637a]\nmariabackup(main+0x163)[0x555c7873b053]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x757f3c03724a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x757f3c037305]\nmariabackup(_start+0x21)[0x555c78780161]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128063 128063 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E\n\nKernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", 
"INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-06-01 05:15:48 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-06-01 05:15:48 Using server version 10.11.13-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-06-01 05:15:48 incremental backup from 0 is enabled.", "[00] 2025-06-01 05:15:48 uses posix_fadvise().", "[00] 2025-06-01 05:15:48 cd to /var/lib/mysql/", "[00] 2025-06-01 05:15:48 open files limit requested 0, set to 1048576", "[00] 2025-06-01 05:15:48 mariabackup: using the following InnoDB configuration:", "[00] 2025-06-01 05:15:48 innodb_data_home_dir = ", "[00] 2025-06-01 05:15:48 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-06-01 05:15:48 innodb_log_group_home_dir = ./", "[00] 2025-06-01 05:15:48 InnoDB: Using liburing", "2025-06-01 5:15:48 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. 
(see man 2 io_uring_setup).", "2025-06-01 5:15:48 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-06-01 5:15:48 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250601 5:15:48 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. 
Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x555c790623ae]", "mariabackup(handle_fatal_signal+0x229)[0x555c78b856d9]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x757f3c04c050]", "mariabackup(server_mysql_fetch_row+0x14)[0x555c787d1474]", "mariabackup(+0x76ca87)[0x555c787a3a87]", "mariabackup(+0x75f37a)[0x555c7879637a]", "mariabackup(main+0x163)[0x555c7873b053]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x757f3c03724a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x757f3c037305]", "mariabackup(_start+0x21)[0x555c78780161]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128063 128063 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E", "", "Kernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-06-01 05:15:49.487282 | orchestrator | 
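The fatal task above fails in two stages: `io_uring_queue_init()` returns EPERM because the host restricts io_uring via the `kernel.io_uring_disabled` sysctl (value 2, or 1 without membership in `kernel.io_uring_group`, per the quoted error), InnoDB falls back to `innodb_use_native_aio=OFF`, and mariabackup then crashes with signal 11 in `server_mysql_fetch_row` — so the segfault itself is a mariabackup bug in the fallback path, not a direct consequence of the sysctl. A minimal Python sketch of the sysctl semantics as cited in the error message (helper names are illustrative, not part of any real tool; the value-1 case also admits privileged processes with CAP_SYS_ADMIN, which is omitted here for brevity):

```python
from pathlib import Path


def io_uring_allowed(disabled: int, in_io_uring_group: bool) -> bool:
    """kernel.io_uring_disabled semantics as quoted by mariabackup:
    0 = io_uring available to all processes,
    1 = available only to members of kernel.io_uring_group,
    2 = disabled system-wide (io_uring_setup() fails with EPERM)."""
    if disabled == 0:
        return True
    if disabled == 1:
        return in_io_uring_group
    return False


def current_io_uring_disabled() -> int:
    """Read the live sysctl; the file is absent on kernels without the
    knob, which behaves like 0 (unrestricted)."""
    p = Path("/proc/sys/kernel/io_uring_disabled")
    return int(p.read_text()) if p.exists() else 0
```

On this Ubuntu 24.04 node the EPERM branch is taken, so lowering the sysctl (or adding the container user to `kernel.io_uring_group`) would restore the liburing path, but the log shows InnoDB already degrades gracefully; only the subsequent crash aborts the backup.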
[WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-01 05:15:49.487390 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-06-01 05:15:49.488327 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-01 05:15:49.489031 | orchestrator | mariadb_bootstrap_restart
2025-06-01 05:15:49.576233 | orchestrator |
2025-06-01 05:15:49.576864 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-01 05:15:49.577184 | orchestrator | skipping: no hosts matched
2025-06-01 05:15:49.578061 | orchestrator |
2025-06-01 05:15:49.579261 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-01 05:15:49.579635 | orchestrator | skipping: no hosts matched
2025-06-01 05:15:49.580788 | orchestrator |
2025-06-01 05:15:49.581732 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-01 05:15:49.582382 | orchestrator | skipping: no hosts matched
2025-06-01 05:15:49.585310 | orchestrator |
2025-06-01 05:15:49.587293 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-01 05:15:49.588514 | orchestrator |
2025-06-01 05:15:49.592209 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-01 05:15:49.595103 | orchestrator | Sunday 01 June 2025 05:15:49 +0000 (0:00:04.748) 0:00:10.069 ***********
2025-06-01 05:15:49.795891 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:15:49.797099 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:15:49.801547 | orchestrator |
2025-06-01 05:15:49.801607 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-01 05:15:49.801623 | orchestrator | Sunday 01 June 2025 05:15:49 +0000 (0:00:00.220) 0:00:10.289 ***********
2025-06-01 05:15:49.955084 | orchestrator | skipping: [testbed-node-1]
2025-06-01 05:15:49.956233 | orchestrator | skipping: [testbed-node-2]
2025-06-01 05:15:49.957791 | orchestrator |
2025-06-01 05:15:49.959419 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 05:15:49.960230 | orchestrator | 2025-06-01 05:15:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 05:15:49.960354 | orchestrator | 2025-06-01 05:15:49 | INFO  | Please wait and do not abort execution.
2025-06-01 05:15:49.961487 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-06-01 05:15:49.962296 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 05:15:49.963227 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 05:15:49.963717 | orchestrator |
2025-06-01 05:15:49.964323 | orchestrator |
2025-06-01 05:15:49.964880 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 05:15:49.965312 | orchestrator | Sunday 01 June 2025 05:15:49 +0000 (0:00:00.160) 0:00:10.449 ***********
2025-06-01 05:15:49.965964 | orchestrator | ===============================================================================
2025-06-01 05:15:49.966451 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.75s
2025-06-01 05:15:49.966986 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.25s
2025-06-01 05:15:49.967364 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-06-01 05:15:49.967890 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s
2025-06-01 05:15:49.968291 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s
2025-06-01 05:15:49.969161 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-06-01 05:15:49.970215 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.22s
2025-06-01 05:15:49.971105 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.16s
2025-06-01 05:15:50.754679 | orchestrator | ERROR
2025-06-01 05:15:50.755071 | orchestrator | {
2025-06-01 05:15:50.755134 | orchestrator | "delta": "0:04:28.365517",
2025-06-01 05:15:50.755173 | orchestrator | "end": "2025-06-01 05:15:50.659217",
2025-06-01 05:15:50.755205 | orchestrator | "msg": "non-zero return code",
2025-06-01 05:15:50.755236 | orchestrator | "rc": 2,
2025-06-01 05:15:50.755267 | orchestrator | "start": "2025-06-01 05:11:22.293700"
2025-06-01 05:15:50.755296 | orchestrator | } failure
2025-06-01 05:15:50.785054 |
2025-06-01 05:15:50.785178 | PLAY RECAP
2025-06-01 05:15:50.785249 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-06-01 05:15:50.785289 |
2025-06-01 05:15:51.009356 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-01 05:15:51.010482 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-01 05:15:51.799385 |
2025-06-01 05:15:51.799574 | PLAY [Post output play]
2025-06-01 05:15:51.817662 |
2025-06-01 05:15:51.817842 | LOOP [stage-output : Register sources]
2025-06-01 05:15:51.894186 |
2025-06-01 05:15:51.894544 | TASK [stage-output : Check sudo]
2025-06-01 05:15:52.780696 | orchestrator | sudo: a password is required
2025-06-01 05:15:52.950185 | orchestrator | ok: Runtime: 0:00:00.017832
2025-06-01 05:15:52.964044 |
2025-06-01 05:15:52.964209 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-01 05:15:53.004153 |
2025-06-01 05:15:53.004453 | TASK [stage-output : Build a list
of source, dest dictionaries]
2025-06-01 05:15:53.083524 | orchestrator | ok
2025-06-01 05:15:53.092298 |
2025-06-01 05:15:53.092438 | LOOP [stage-output : Ensure target folders exist]
2025-06-01 05:15:53.571057 | orchestrator | ok: "docs"
2025-06-01 05:15:53.571560 |
2025-06-01 05:15:53.819098 | orchestrator | ok: "artifacts"
2025-06-01 05:15:54.085442 | orchestrator | ok: "logs"
2025-06-01 05:15:54.102897 |
2025-06-01 05:15:54.103082 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-01 05:15:54.150258 |
2025-06-01 05:15:54.150562 | TASK [stage-output : Make all log files readable]
2025-06-01 05:15:54.464128 | orchestrator | ok
2025-06-01 05:15:54.471175 |
2025-06-01 05:15:54.471301 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-01 05:15:54.506145 | orchestrator | skipping: Conditional result was False
2025-06-01 05:15:54.516562 |
2025-06-01 05:15:54.516704 | TASK [stage-output : Discover log files for compression]
2025-06-01 05:15:54.540544 | orchestrator | skipping: Conditional result was False
2025-06-01 05:15:54.551123 |
2025-06-01 05:15:54.551256 | LOOP [stage-output : Archive everything from logs]
2025-06-01 05:15:54.608767 |
2025-06-01 05:15:54.609013 | PLAY [Post cleanup play]
2025-06-01 05:15:54.621278 |
2025-06-01 05:15:54.621421 | TASK [Set cloud fact (Zuul deployment)]
2025-06-01 05:15:54.684989 | orchestrator | ok
2025-06-01 05:15:54.696981 |
2025-06-01 05:15:54.697143 | TASK [Set cloud fact (local deployment)]
2025-06-01 05:15:54.743864 | orchestrator | skipping: Conditional result was False
2025-06-01 05:15:54.760802 |
2025-06-01 05:15:54.760982 | TASK [Clean the cloud environment]
2025-06-01 05:15:55.402678 | orchestrator | 2025-06-01 05:15:55 - clean up servers
2025-06-01 05:15:56.161451 | orchestrator | 2025-06-01 05:15:56 - testbed-manager
2025-06-01 05:15:56.246887 | orchestrator | 2025-06-01 05:15:56 - testbed-node-2
2025-06-01 05:15:56.333440 | orchestrator | 2025-06-01 05:15:56 - testbed-node-0
2025-06-01 05:15:56.418487 | orchestrator | 2025-06-01 05:15:56 - testbed-node-4
2025-06-01 05:15:56.518627 | orchestrator | 2025-06-01 05:15:56 - testbed-node-3
2025-06-01 05:15:56.612019 | orchestrator | 2025-06-01 05:15:56 - testbed-node-1
2025-06-01 05:15:56.702245 | orchestrator | 2025-06-01 05:15:56 - testbed-node-5
2025-06-01 05:15:56.789514 | orchestrator | 2025-06-01 05:15:56 - clean up keypairs
2025-06-01 05:15:56.810557 | orchestrator | 2025-06-01 05:15:56 - testbed
2025-06-01 05:15:56.834129 | orchestrator | 2025-06-01 05:15:56 - wait for servers to be gone
2025-06-01 05:16:07.640211 | orchestrator | 2025-06-01 05:16:07 - clean up ports
2025-06-01 05:16:07.855880 | orchestrator | 2025-06-01 05:16:07 - 18b157c4-66cc-49a6-9c19-4209690e3423
2025-06-01 05:16:08.134431 | orchestrator | 2025-06-01 05:16:08 - 1b8320e0-2b13-45fe-b828-248153fcfc53
2025-06-01 05:16:08.810467 | orchestrator | 2025-06-01 05:16:08 - 233bab93-c8db-4d4c-aa6e-d9f75d0e8cf9
2025-06-01 05:16:09.013133 | orchestrator | 2025-06-01 05:16:09 - 250b1526-cec1-43f1-8f85-4c52ac0091b8
2025-06-01 05:16:09.309443 | orchestrator | 2025-06-01 05:16:09 - 9333ce4f-75ed-4d4e-9188-b877c6624712
2025-06-01 05:16:09.726966 | orchestrator | 2025-06-01 05:16:09 - b7c00bda-db82-44b8-801d-a3f2f963f1e4
2025-06-01 05:16:09.930360 | orchestrator | 2025-06-01 05:16:09 - f0016548-594a-469a-9bce-bb69a03694ca
2025-06-01 05:16:10.125215 | orchestrator | 2025-06-01 05:16:10 - clean up volumes
2025-06-01 05:16:10.251537 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-2-node-base
2025-06-01 05:16:10.295227 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-3-node-base
2025-06-01 05:16:10.336046 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-5-node-base
2025-06-01 05:16:10.385225 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-4-node-base
2025-06-01 05:16:10.427564 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-manager-base
2025-06-01 05:16:10.468307 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-0-node-base
2025-06-01 05:16:10.510439 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-1-node-base
2025-06-01 05:16:10.549716 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-8-node-5
2025-06-01 05:16:10.591289 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-1-node-4
2025-06-01 05:16:10.631177 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-0-node-3
2025-06-01 05:16:10.671819 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-3-node-3
2025-06-01 05:16:10.712615 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-6-node-3
2025-06-01 05:16:10.754448 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-5-node-5
2025-06-01 05:16:10.796714 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-2-node-5
2025-06-01 05:16:10.837034 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-7-node-4
2025-06-01 05:16:10.877278 | orchestrator | 2025-06-01 05:16:10 - testbed-volume-4-node-4
2025-06-01 05:16:10.917078 | orchestrator | 2025-06-01 05:16:10 - disconnect routers
2025-06-01 05:16:11.033343 | orchestrator | 2025-06-01 05:16:11 - testbed
2025-06-01 05:16:12.011388 | orchestrator | 2025-06-01 05:16:12 - clean up subnets
2025-06-01 05:16:12.065772 | orchestrator | 2025-06-01 05:16:12 - subnet-testbed-management
2025-06-01 05:16:12.236677 | orchestrator | 2025-06-01 05:16:12 - clean up networks
2025-06-01 05:16:12.504453 | orchestrator | 2025-06-01 05:16:12 - net-testbed-management
2025-06-01 05:16:12.805152 | orchestrator | 2025-06-01 05:16:12 - clean up security groups
2025-06-01 05:16:12.840181 | orchestrator | 2025-06-01 05:16:12 - testbed-management
2025-06-01 05:16:13.058370 | orchestrator | 2025-06-01 05:16:13 - testbed-node
2025-06-01 05:16:13.170149 | orchestrator | 2025-06-01 05:16:13 - clean up floating ips
2025-06-01 05:16:13.203898 | orchestrator | 2025-06-01 05:16:13 - 81.163.193.201
2025-06-01 05:16:13.577937 | orchestrator | 2025-06-01 05:16:13 - clean up routers
2025-06-01 05:16:13.677590 | orchestrator | 2025-06-01 05:16:13 - testbed
2025-06-01 05:16:15.321083 | orchestrator | ok: Runtime: 0:00:19.901815
2025-06-01 05:16:15.329525 |
2025-06-01 05:16:15.329656 | PLAY RECAP
2025-06-01 05:16:15.329767 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-01 05:16:15.329814 |
2025-06-01 05:16:15.472983 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-01 05:16:15.475466 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-01 05:16:16.225964 |
2025-06-01 05:16:16.226146 | PLAY [Cleanup play]
2025-06-01 05:16:16.242622 |
2025-06-01 05:16:16.242782 | TASK [Set cloud fact (Zuul deployment)]
2025-06-01 05:16:16.313507 | orchestrator | ok
2025-06-01 05:16:16.324285 |
2025-06-01 05:16:16.324440 | TASK [Set cloud fact (local deployment)]
2025-06-01 05:16:16.370580 | orchestrator | skipping: Conditional result was False
2025-06-01 05:16:16.386988 |
2025-06-01 05:16:16.387146 | TASK [Clean the cloud environment]
2025-06-01 05:16:17.587910 | orchestrator | 2025-06-01 05:16:17 - clean up servers
2025-06-01 05:16:18.069012 | orchestrator | 2025-06-01 05:16:18 - clean up keypairs
2025-06-01 05:16:18.084038 | orchestrator | 2025-06-01 05:16:18 - wait for servers to be gone
2025-06-01 05:16:18.129126 | orchestrator | 2025-06-01 05:16:18 - clean up ports
2025-06-01 05:16:18.211393 | orchestrator | 2025-06-01 05:16:18 - clean up volumes
2025-06-01 05:16:18.286188 | orchestrator | 2025-06-01 05:16:18 - disconnect routers
2025-06-01 05:16:18.318438 | orchestrator | 2025-06-01 05:16:18 - clean up subnets
2025-06-01 05:16:18.343237 | orchestrator | 2025-06-01 05:16:18 - clean up networks
2025-06-01 05:16:18.527241 | orchestrator | 2025-06-01 05:16:18 - clean up security groups
2025-06-01 05:16:18.558793 | orchestrator | 2025-06-01 05:16:18 - clean up floating ips
2025-06-01 05:16:18.582397 | orchestrator | 2025-06-01 05:16:18 - clean up routers
2025-06-01 05:16:18.945849 | orchestrator | ok: Runtime: 0:00:01.408157
2025-06-01 05:16:18.949807 |
2025-06-01 05:16:18.949985 | PLAY RECAP
2025-06-01 05:16:18.950119 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-01 05:16:18.950188 |
2025-06-01 05:16:19.086357 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-01 05:16:19.087433 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-01 05:16:19.884223 |
2025-06-01 05:16:19.884406 | PLAY [Base post-fetch]
2025-06-01 05:16:19.900472 |
2025-06-01 05:16:19.900637 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-01 05:16:19.956281 | orchestrator | skipping: Conditional result was False
2025-06-01 05:16:19.969073 |
2025-06-01 05:16:19.969259 | TASK [fetch-output : Set log path for single node]
2025-06-01 05:16:20.018528 | orchestrator | ok
2025-06-01 05:16:20.027692 |
2025-06-01 05:16:20.027893 | LOOP [fetch-output : Ensure local output dirs]
2025-06-01 05:16:20.555358 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/work/logs"
2025-06-01 05:16:20.836910 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/work/artifacts"
2025-06-01 05:16:21.100383 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/980ddd066ddd4088882f2d78fb6ced5e/work/docs"
2025-06-01 05:16:21.123940 |
2025-06-01 05:16:21.124152 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-01 05:16:22.077448 | orchestrator | changed: .d..t...... ./
2025-06-01 05:16:22.077837 | orchestrator | changed: All items complete
2025-06-01 05:16:22.077896 |
2025-06-01 05:16:22.871140 | orchestrator | changed: .d..t...... ./
2025-06-01 05:16:23.640872 | orchestrator | changed: .d..t...... ./
2025-06-01 05:16:23.675376 |
2025-06-01 05:16:23.675554 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-01 05:16:23.722360 | orchestrator | skipping: Conditional result was False
2025-06-01 05:16:23.728640 | orchestrator | skipping: Conditional result was False
2025-06-01 05:16:23.754345 |
2025-06-01 05:16:23.754466 | PLAY RECAP
2025-06-01 05:16:23.754535 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-01 05:16:23.754570 |
2025-06-01 05:16:23.892232 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-01 05:16:23.894623 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-01 05:16:24.669868 |
2025-06-01 05:16:24.670035 | PLAY [Base post]
2025-06-01 05:16:24.684676 |
2025-06-01 05:16:24.684843 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-01 05:16:25.686315 | orchestrator | changed
2025-06-01 05:16:25.697144 |
2025-06-01 05:16:25.697275 | PLAY RECAP
2025-06-01 05:16:25.697350 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-01 05:16:25.697427 |
2025-06-01 05:16:25.826360 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-01 05:16:25.827419 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-01 05:16:26.638514 |
2025-06-01 05:16:26.638702 | PLAY [Base post-logs]
2025-06-01 05:16:26.651722 |
2025-06-01 05:16:26.651915 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-01 05:16:27.120692 | localhost | changed
2025-06-01 05:16:27.138544 |
2025-06-01 05:16:27.138808 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-01 05:16:27.177684 | localhost | ok
2025-06-01 05:16:27.184968 |
2025-06-01 05:16:27.185142 | TASK [Set zuul-log-path fact]
2025-06-01 05:16:27.203734 | localhost | ok
2025-06-01 05:16:27.218180 |
2025-06-01 05:16:27.218350 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-01 05:16:27.257620 | localhost | ok
2025-06-01 05:16:27.264810 |
2025-06-01 05:16:27.265013 | TASK [upload-logs : Create log directories]
2025-06-01 05:16:27.803361 | localhost | changed
2025-06-01 05:16:27.807550 |
2025-06-01 05:16:27.807693 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-01 05:16:28.334806 | localhost -> localhost | ok: Runtime: 0:00:00.005862
2025-06-01 05:16:28.343866 |
2025-06-01 05:16:28.344047 | TASK [upload-logs : Upload logs to log server]
2025-06-01 05:16:28.936123 | localhost | Output suppressed because no_log was given
2025-06-01 05:16:28.941208 |
2025-06-01 05:16:28.941405 | LOOP [upload-logs : Compress console log and json output]
2025-06-01 05:16:29.000210 | localhost | skipping: Conditional result was False
2025-06-01 05:16:29.005360 | localhost | skipping: Conditional result was False
2025-06-01 05:16:29.013913 |
2025-06-01 05:16:29.014172 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-01 05:16:29.062822 | localhost | skipping: Conditional result was False
2025-06-01 05:16:29.063498 |
2025-06-01 05:16:29.067091 | localhost | skipping: Conditional result was False
2025-06-01 05:16:29.080988 |
2025-06-01 05:16:29.081257 | LOOP [upload-logs : Upload console log and json output]
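The "Clean the cloud environment" tasks above tear down resources in a fixed order — servers first, routers last — so that nothing is deleted while another resource still references it. A small sketch that encodes the order printed in the log and sanity-checks the ordering constraints it implies (the constraint map is my reading of the OpenStack resource relationships, not something the playbook exports):

```python
# Teardown order exactly as printed by the testbed cleanup task.
CLEANUP_ORDER = [
    "servers", "keypairs", "wait for servers to be gone", "ports",
    "volumes", "disconnect routers", "subnets", "networks",
    "security groups", "floating ips", "routers",
]

# Each key must be handled before every step in its value set:
# servers hold ports/volumes, ports attach to subnets, and router
# interfaces must be detached before subnets (and the routers
# themselves) can be removed.
MUST_PRECEDE = {
    "servers": {"ports", "volumes", "networks"},
    "ports": {"subnets", "networks"},
    "disconnect routers": {"subnets", "routers"},
    "subnets": {"networks"},
}


def order_ok(order, deps):
    """Return True if every dependency key appears before all of its
    dependent steps in the given order."""
    pos = {name: i for i, name in enumerate(order)}
    return all(pos[a] < pos[b] for a, later in deps.items() for b in later)
```

The second cleanup pass (from `cleanup.yml`) runs the same sequence against an already-empty project, which is why each step there completes in milliseconds.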